From ping@lfw.org  Fri Sep  1 00:16:55 2000
From: ping@lfw.org (Ka-Ping Yee)
Date: Thu, 31 Aug 2000 18:16:55 -0500 (CDT)
Subject: [Python-Dev] linuxaudiodev test does nothing
In-Reply-To: <14766.50976.102853.695767@buffalo.fnal.gov>
Message-ID: <Pine.LNX.4.10.10008311804230.11804-100000@server1.lfw.org>

On Thu, 31 Aug 2000, Charles G Waldman wrote:
> Alas, even after fixing this, I *still* can't get linuxaudiodev to
> play the damned .au file.  It works fine for the .wav formats.
> 
> I'll continue hacking on this as time permits.

Just so you know -- i was definitely able to get this to work at
some point before when we were trying to fix this.  I changed
test_linuxaudiodev and it played the .AU file correctly.  I haven't
had time to survey what the state of the various modules is now,
though -- i'll have a look around and see what's going on.

Side note: is there a well-defined platform-independent sound
interface we should be conforming to?  It would be nice to have a
single Python function for each of the following things:

    1. Play a .wav file given its filename.

    2. Play a .au file given its filename.

    3. Play some raw audio data, given a string of bytes and a
       sampling rate.

which would work on as many platforms as possible with the same command.
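
A rough sketch of what such a dispatcher might look like (the backend
helpers here are hypothetical stubs, not existing module APIs):

```python
import os
import sys

# Illustrative sketch of a single cross-platform entry point for the
# three functions listed above.  The backend helpers are hypothetical
# stubs; a real version would call linuxaudiodev, sunaudiodev, etc.

def _play_linux(path):
    return "linuxaudiodev"      # stub: would write frames to /dev/dsp

def _play_sun(path):
    return "sunaudiodev"        # stub: would use the Sun audio device

_BACKENDS = [("linux", _play_linux), ("sunos", _play_sun)]

def play_file(path, platform=sys.platform):
    """Play a .wav or .au file by name, dispatching on the platform."""
    ext = os.path.splitext(path)[1].lower()
    if ext not in (".wav", ".au"):
        raise ValueError("unsupported audio file: %r" % path)
    for prefix, backend in _BACKENDS:
        if platform.startswith(prefix):
            return backend(path)
    raise NotImplementedError("no audio backend for %r" % platform)
```

With something like this, play_file("spam.au") would pick the right
backend on Linux or Solaris and fail loudly everywhere else.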

A quick glance at audiodev.py shows that it seems to support only
Sun and SGI.  Should it be extended?

If someone's already in charge of this and knows what's up, let me know.
I'm sorry if this is common knowledge of which i was just unaware.



-- ?!ng



From bwarsaw@beopen.com  Fri Sep  1 00:22:53 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Thu, 31 Aug 2000 19:22:53 -0400 (EDT)
Subject: [Python-Dev] test_gettext.py fails on 64-bit architectures
References: <39AEBD4A.55ABED9E@per.dem.csiro.au>
 <39AE07FF.478F413@per.dem.csiro.au>
 <14766.14278.609327.610929@anthem.concentric.net>
 <39AEBD01.601F7A83@per.dem.csiro.au>
Message-ID: <14766.59597.713039.633184@anthem.concentric.net>

>>>>> "MF" == Mark Favas <m.favas@per.dem.csiro.au> writes:

    MF> Close, but no cigar - fixes the miscalculation of BE_MAGIC,
    MF> but "magic" is still read from the .mo file as
    MF> 0xffffffff950412de (the 64-bit rep of the 32-bit negative
    MF> integer 0x950412de)

Thanks to a quick chat with Tim, who is always quick to grasp the meat
of the issue, we realize we need to & 0xffffffff all the 32 bit
unsigned ints we're reading out of the .mo files.  I'll work out a
patch, and check it in after a test on 32-bit Linux.  Watch for it,
and please try it out on your box.
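
Roughly, the masking works like this (an illustration of the idea, not
the actual gettext.py patch):

```python
import struct

# Sketch of the masking described above: unpacking a 32-bit field as a
# signed int yields a negative number for values >= 2**31, which
# sign-extends on a 64-bit build; "& 0xffffffff" recovers the intended
# unsigned 32-bit value.
LE_MAGIC = 0x950412de

def read_u32(data, offset=0):
    """Read a little-endian 32-bit unsigned field from .mo file data."""
    return struct.unpack("<i", data[offset:offset + 4])[0] & 0xffffffff

magic_bytes = b"\xde\x12\x04\x95"                # magic as stored on disk
assert struct.unpack("<i", magic_bytes)[0] < 0   # raw signed read is negative
assert read_u32(magic_bytes) == LE_MAGIC         # masked read is correct
```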

Thanks,
-Barry


From gstein@lyra.org  Fri Sep  1 02:51:04 2000
From: gstein@lyra.org (Greg Stein)
Date: Thu, 31 Aug 2000 18:51:04 -0700
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Python ceval.c,2.200,2.201
In-Reply-To: <14766.65024.122762.332972@bitdiddle.concentric.net>; from jeremy@beopen.com on Thu, Aug 31, 2000 at 08:53:20PM -0400
References: <200009010002.RAA23432@slayer.i.sourceforge.net> <14766.65024.122762.332972@bitdiddle.concentric.net>
Message-ID: <20000831185103.D3278@lyra.org>

On Thu, Aug 31, 2000 at 08:53:20PM -0400, Jeremy Hylton wrote:
> Any opinion on whether the Py_SetRecursionLimit should do sanity
> checking on its arguments?

-1 ... it's an advanced function. It's the caller's problem if they monkey
it up.

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/


From gstein@lyra.org  Fri Sep  1 03:12:08 2000
From: gstein@lyra.org (Greg Stein)
Date: Thu, 31 Aug 2000 19:12:08 -0700
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Python ceval.c,2.200,2.201
In-Reply-To: <200009010002.RAA23432@slayer.i.sourceforge.net>; from tim_one@users.sourceforge.net on Thu, Aug 31, 2000 at 05:02:01PM -0700
References: <200009010002.RAA23432@slayer.i.sourceforge.net>
Message-ID: <20000831191208.G3278@lyra.org>

On Thu, Aug 31, 2000 at 05:02:01PM -0700, Tim Peters wrote:
> Update of /cvsroot/python/python/dist/src/Python
> In directory slayer.i.sourceforge.net:/tmp/cvs-serv20859/python/dist/src/Python
> 
> Modified Files:
> 	ceval.c 
> Log Message:
> Supply missing prototypes for new Py_{Get,Set}RecursionLimit; fixes compiler wngs;
> un-analize Get's definition ("void" is needed only in declarations, not defns, &
> is generally considered bad style in the latter).

wtf? Placing a void in both declaration *and* definition is "good style".

static int foo(void) { ... }
int bar() { ... }

You're setting yourself up for inconsistency if you don't always use a
prototypical definition. In the above example, foo() must be
declared/defined using a prototype (or you get warnings from gcc when you
compile with -Wmissing-prototypes (which is recommended for developers)).
But you're saying bar() should *not* have a prototype.


-1 on dropping the "void" from the definition. I disagree it is bad form,
and it sets us up for inconsistencies.

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/


From gward@python.net  Fri Sep  1 03:10:47 2000
From: gward@python.net (Greg Ward)
Date: Thu, 31 Aug 2000 19:10:47 -0700
Subject: [Python-Dev] ANNOUNCE: Distutils 0.9.2
Message-ID: <20000831191047.C31473@python.net>

...just in time for the Python 2.0b1 feature freeze, Distutils 0.9.2 has
been released.  Changes since 0.9.1:

  * fixed bug that broke extension-building under Windows for older
    setup scripts (not using the new Extension class)
      
  * new version of bdist_wininst command and associated tools: fixes
    some bugs, produces a smaller executable, and has a nicer GUI
    (thanks to Thomas Heller)
		
  * added some hooks to 'setup()' to allow some slightly sneaky ways
    into the Distutils, in addition to the standard "run 'setup()'
    from a setup script"
	
Get your copy today:

  http://www.python.org/sigs/distutils-sig/download.html
  
        Greg


From jeremy@beopen.com  Fri Sep  1 03:40:25 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Thu, 31 Aug 2000 22:40:25 -0400 (EDT)
Subject: [Python-Dev] static int debug = 0;
Message-ID: <14767.5913.521593.234904@bitdiddle.concentric.net>

Quick note on BDFL-approved style for C code.

I recently changed a line in gcmodule.c from
static int debug;
to 
static int debug = 0;

The change is redundant, as several people pointed out, because the C
std requires debug to be initialized to 0.  I didn't realize this.
Inadvertently, however, I made the right change.  The preferred style
is to be explicit about initialization if other code depends on or
assumes that it is initialized to a particular value -- even if that
value is 0.

If the code is guaranteed to do an assignment of its own before the
first use, it's okay to omit the initialization with the decl.

Jeremy





From greg@cosc.canterbury.ac.nz  Fri Sep  1 03:37:36 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 01 Sep 2000 14:37:36 +1200 (NZST)
Subject: [Python-Dev] Pragmas: Just say "No!"
In-Reply-To: <39AE5E79.C2C91730@lemburg.com>
Message-ID: <200009010237.OAA18429@s454.cosc.canterbury.ac.nz>

"M.-A. Lemburg" <mal@lemburg.com>:

> If it's just the word itself that's bugging you, then
> we can have a separate discussion on that. Perhaps "assume"
> or "declare" would be a better candidates.

Yes, "declare" would be better. ALthough I'm still somewhat
uncomfortable with the idea of naming a language feature
before having a concrete example of what it's going to be
 used for.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From guido@beopen.com  Fri Sep  1 04:54:10 2000
From: guido@beopen.com (Guido van Rossum)
Date: Thu, 31 Aug 2000 22:54:10 -0500
Subject: [Python-Dev] linuxaudiodev test does nothing
In-Reply-To: Your message of "Thu, 31 Aug 2000 18:16:55 EST."
 <Pine.LNX.4.10.10008311804230.11804-100000@server1.lfw.org>
References: <Pine.LNX.4.10.10008311804230.11804-100000@server1.lfw.org>
Message-ID: <200009010354.WAA30234@cj20424-a.reston1.va.home.com>

> A quick glance at audiodev.py shows that it seems to support only
> Sun and SGI.  Should it be extended?

Yes.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From guido@beopen.com  Fri Sep  1 05:00:37 2000
From: guido@beopen.com (Guido van Rossum)
Date: Thu, 31 Aug 2000 23:00:37 -0500
Subject: [Python-Dev] Namespace collision between lib/xml and site-packages/xml
In-Reply-To: Your message of "Fri, 01 Sep 2000 06:29:47 +0800."
 <39AEDC5B.333F737E@per.dem.csiro.au>
References: <39AEDC5B.333F737E@per.dem.csiro.au>
Message-ID: <200009010400.XAA30273@cj20424-a.reston1.va.home.com>

> On July 26 I reported that the new xml package in the standard library
> collides with and overrides the xml package from the xml-sig that may be
> installed in site-packages. This is still the case. The new package does
> not have the same functionality as the one in site-packages, and hence
> my application (and others relying on similar functionality) gets an
> import error. I understood that it was planned that the new library xml
> package would check for the site-package version, and transparently hand
> over to it if it existed. It's not really an option to remove/rename the
> xml package in the std lib, or to break existing xml-based code...
> 
> Of course, this might be fixed by 2.0b1, or is it a feature that will be
> frozen out <wry smile>?
> 
> Fred's response was:
> "  I expect we'll be making the package in site-packages an extension
> provider for the xml package in the standard library.  I'm planning to
> discuss this issue at today's PythonLabs meeting." 

I remember our group discussion about this.  What's currently
implemented is what we decided there, based upon (Fred's
representation of) what the XML-sig wanted.  So you don't like this
either, right?

I believe there are two conflicting desires here: (1) the standard XML
package by the core should be named simply "xml"; (2) you want the old
XML-sig code (which lives in a package named "xml" but installed in
site-packages) to override the core xml package.

I don't think that's possible -- at least not without a hack that's
too ugly to accept.

You might be able to get the old XML-sig code to override the core xml
package by creating a symlink named _xmlplus to it in site-packages
though.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From guido@beopen.com  Fri Sep  1 05:04:02 2000
From: guido@beopen.com (Guido van Rossum)
Date: Thu, 31 Aug 2000 23:04:02 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Python ceval.c,2.200,2.201
In-Reply-To: Your message of "Thu, 31 Aug 2000 19:12:08 MST."
 <20000831191208.G3278@lyra.org>
References: <200009010002.RAA23432@slayer.i.sourceforge.net>
 <20000831191208.G3278@lyra.org>
Message-ID: <200009010404.XAA30306@cj20424-a.reston1.va.home.com>

> You're setting yourself up for inconsistency if you don't always use a
> prototypical definition. In the above example, foo() must be
> declared/defined using a prototype (or you get warnings from gcc when you
> compile with -Wmissing-prototypes (which is recommended for developers)).
> But you're saying bar() should *not* have a prototype.
> 
> 
> -1 on dropping the "void" from the definition. I disagree it is bad form,
> and it sets us up for inconsistencies.

We discussed this briefly today in our group chat, and I'm +0 on
Greg's recommendation (that's +0 on keeping (void) in definitions).

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From tim_one@email.msn.com  Fri Sep  1 04:12:25 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 31 Aug 2000 23:12:25 -0400
Subject: [Python-Dev] RE: [Python-checkins] CVS: python/dist/src/Python ceval.c,2.200,2.201
In-Reply-To: <20000831191208.G3278@lyra.org>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEFJHDAA.tim_one@email.msn.com>

[Greg Stein]
> ...
> static int foo(void) { ... }
> int bar() { ... }
>
> You're setting yourself up for inconsistency if you don't always use a
> prototypical definition. In the above example, foo() must be
> declared/defined using a prototype (or you get warnings from gcc when you
> compile with -Wmissing-prototypes (which is recommended for developers)).
> But you're saying bar() should *not* have a prototype.

This must be about the pragmatics of gcc, as the C std doesn't say any of
that stuff -- to the contrary, in a *definition* (as opposed to a
declaration), bar() and bar(void) are identical in meaning (as far as the
std goes).

But I confess I don't use gcc at the moment, and have mostly used C
grudgingly the past 5 years when porting things to C++, and my "bad style"
really came from the latter (C++ doesn't cater to K&R-style decls or
"Miranda prototypes" at all, so "thing(void)" is just an eyesore there).

> -1 on dropping the "void" from the definition. I disagree it is bad form,
> and it sets us up for inconsistencies.

Good enough for me -- I'll change it back.





From fdrake@beopen.com  Fri Sep  1 04:28:59 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Thu, 31 Aug 2000 23:28:59 -0400 (EDT)
Subject: [Python-Dev] static int debug = 0;
In-Reply-To: <14767.5913.521593.234904@bitdiddle.concentric.net>
References: <14767.5913.521593.234904@bitdiddle.concentric.net>
Message-ID: <14767.8827.492944.536878@cj42289-a.reston1.va.home.com>

Jeremy Hylton writes:
 > The change is redundant, as several people pointed out, because the C
 > std requires debug to be initialized to 0.  I didn't realize this.
 > Inadvertently, however, I made the right change.  The preferred style
 > is to be explicit about initialization if other code depends on or
 > assumes that it is initialized to a particular value -- even if that
 > value is 0.

  According to the BDFL?  He's told me *not* to do that if setting it
to 0 (or NULL, in case of a pointer), but I guess that was several
years ago now (before I went to CNRI, I think).
  I need to get a style guide written, I suppose!  -sigh-
  (I agree the right thing is to use explicit initialization, and
would go so far as to say to *always* use it for readability and
robustness in the face of changing code.)


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From jeremy@beopen.com  Fri Sep  1 04:37:41 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Thu, 31 Aug 2000 23:37:41 -0400 (EDT)
Subject: [Python-Dev] static int debug = 0;
In-Reply-To: <14767.8827.492944.536878@cj42289-a.reston1.va.home.com>
References: <14767.5913.521593.234904@bitdiddle.concentric.net>
 <14767.8827.492944.536878@cj42289-a.reston1.va.home.com>
Message-ID: <14767.9349.324188.289319@bitdiddle.concentric.net>

>>>>> "FLD" == Fred L Drake, <fdrake@beopen.com> writes:

  FLD> Jeremy Hylton writes:
  >> The change is redundant, as several people pointed out, because
  >> the C std requires debug to be initialized to 0.  I didn't
  >> realize this.  Inadvertently, however, I made the right change.
  >> The preferred style is to be explicit about initialization if
  >> other code depends on or assumes that it is initialized to a
  >> particular value -- even if that value is 0.

  FLD>   According to the BDFL?  He's told me *not* to do that if
  FLD>   setting it
  FLD> to 0 (or NULL, in case of a pointer), but I guess that was
  FLD> several years ago now (before I went to CNRI, I think).

It's these chat sessions.  They bring out the worst in him <wink>.

Jeremy


From guido@beopen.com  Fri Sep  1 05:36:05 2000
From: guido@beopen.com (Guido van Rossum)
Date: Thu, 31 Aug 2000 23:36:05 -0500
Subject: [Python-Dev] static int debug = 0;
In-Reply-To: Your message of "Thu, 31 Aug 2000 23:28:59 -0400."
 <14767.8827.492944.536878@cj42289-a.reston1.va.home.com>
References: <14767.5913.521593.234904@bitdiddle.concentric.net>
 <14767.8827.492944.536878@cj42289-a.reston1.va.home.com>
Message-ID: <200009010436.XAA06824@cj20424-a.reston1.va.home.com>

> Jeremy Hylton writes:
>  > The change is redundant, as several people pointed out, because the C
>  > std requires debug to be initialized to 0.  I didn't realize this.
>  > Inadvertently, however, I made the right change.  The preferred style
>  > is to be explicit about initialization if other code depends on or
>  > assumes that it is initialized to a particular value -- even if that
>  > value is 0.

Fred:
>   According to the BDFL?  He's told me *not* to do that if setting it
> to 0 (or NULL, in case of a pointer), but I guess that was several
> years ago now (before I went to CNRI, I think).

Can't remember that now.  I told Jeremy what he wrote here.

>   I need to get a style guide written, I suppose!  -sigh-

Yes!

>   (I agree the right thing is to use explicit initialization, and
> would go so far as to say to *always* use it for readability and
> robustness in the face of changing code.)

No -- initializing variables that are assigned to first thing later is
less readable.  The presence or absence of the initialization should
be a subtle hint on whether the initial value is used.  If the code
changes, change the initialization.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From tim_one@email.msn.com  Fri Sep  1 04:40:47 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 31 Aug 2000 23:40:47 -0400
Subject: [Python-Dev] test_popen2 broken on Windows
Message-ID: <LNBBLJKPBEHFEDALKOLCGEFLHDAA.tim_one@email.msn.com>

FYI, we know that test_popen2 is broken on Windows.  I'm in the process of
fixing it.




From fdrake@beopen.com  Fri Sep  1 04:42:59 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Thu, 31 Aug 2000 23:42:59 -0400 (EDT)
Subject: [Python-Dev] test_popen2 broken on Windows
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEFLHDAA.tim_one@email.msn.com>
References: <LNBBLJKPBEHFEDALKOLCGEFLHDAA.tim_one@email.msn.com>
Message-ID: <14767.9667.205457.791956@cj42289-a.reston1.va.home.com>

Tim Peters writes:
 > FYI, we know that test_popen2 is broken on Windows.  I'm in the process of
 > fixing it.

  If you can think of a good test case for os.popen4(), I'd love to
see it!  I couldn't think of one earlier that even had a remote chance
of being portable.  If you can make one that passes on Windows, I'll
either adapt it or create an alternate for Unix.  ;)


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From tim_one@email.msn.com  Fri Sep  1 04:55:41 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 31 Aug 2000 23:55:41 -0400
Subject: [Python-Dev] FW: test_largefile cause kernel panic in Mac OS X DP4
In-Reply-To: <20000831082821.B3569@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEFMHDAA.tim_one@email.msn.com>

[Trent Mick]
> Tim (or anyone with python-list logs), can you forward this to Sachin (who
> reported the bug).

Sorry for not getting back to you sooner.  I just fwd'ed the fellow's
problem as an FYI for the Python-Dev'ers, not as something crucial for
2.0b1.  His symptom is a kernel panic in what looked like a pre-release OS,
and that's certainly not your fault!  Like he said:

>> I guess my next step is to log a bug with Apple.

Since nobody else spoke up, I'll fwd your msg to him eventually, but that
will take a little time to find his address via DejaNews, & it's not a
priority tonight.




From tim_one@email.msn.com  Fri Sep  1 05:03:18 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Fri, 1 Sep 2000 00:03:18 -0400
Subject: [Python-Dev] test_popen2 broken on Windows
In-Reply-To: <14767.9667.205457.791956@cj42289-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEFNHDAA.tim_one@email.msn.com>

[Fred]
>   If you can think of a good test case for os.popen4(), I'd love to
> see it!  I couldn't think of one earlier that even had a remote chance
> of being portable.  If you can make one that passes on Windows, I'll
> either adapt it or create an alternate for Unix.  ;)

Not tonight.  I've never used popen4 in my life, and disapprove of almost
all functions with trailing digits in their names.  Also most movies,  and
especially after "The Hidden 2".  How come nobody writes song sequels?
"Stairway to Heaven 2", say, or "Beethoven's Fifth Symphony 3"?  That's one
for Barry to ponder ...

otoh-trailing-digits-are-a-sign-of-quality-in-an-os-name-ly y'rs  - tim




From Mark.Favas@per.dem.csiro.au  Fri Sep  1 08:31:57 2000
From: Mark.Favas@per.dem.csiro.au (Favas, Mark (EM, Floreat))
Date: Fri, 1 Sep 2000 15:31:57 +0800
Subject: [Python-Dev] Namespace collision between lib/xml and site-packages/xml
Message-ID: <C03F68DA202BD411B00700B0D022B09E1AD950@martok.wa.CSIRO.AU>

Guido wrote:
>I remember our group discussion about this.  What's currently
>implemented is what we decided there, based upon (Fred's
>representation of) what the XML-sig wanted.  So you don't like this
>either, right?

Hey - not so. I saw the original problem, asked about it, was told it would
be discussed, heard nothing of the results of the discussion, saw that I
still had the same problem close to the release of 2.0b1, thought maybe it
had slipped through the cracks, and asked again in an effort to help. I
apologise if it came across in any other way.

>I believe there are two conflicting desires here: (1) the standard XML
>package by the core should be named simply "xml"; (2) you want the old
>XML-sig code (which lives in a package named "xml" but installed in
>site-packages) to override the core xml package.

I'm happy with (1) being the standard XML package - I thought from Fred's
original post that there might be some way of having both work together. 

>I don't think that's possible -- at least not without a hack that's
>too ugly to accept.

Glad to have this clarified.

>You might be able to get the old XML-sig code to override the core xml
>package by creating a symlink named _xmlplus to it in site-packages
>though.

Thanks for the suggestion - I'll try it. Since my code has to run on Windows
as well, probably the best thing I can do is bundle up the xml-sig stuff in
my distribution, call it something else, and get around it all that way.

Mark


From thomas@xs4all.net  Fri Sep  1 08:41:24 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Fri, 1 Sep 2000 09:41:24 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Python ceval.c,2.200,2.201
In-Reply-To: <200009010002.RAA23432@slayer.i.sourceforge.net>; from tim_one@users.sourceforge.net on Thu, Aug 31, 2000 at 05:02:01PM -0700
References: <200009010002.RAA23432@slayer.i.sourceforge.net>
Message-ID: <20000901094123.L12695@xs4all.nl>

On Thu, Aug 31, 2000 at 05:02:01PM -0700, Tim Peters wrote:

> Log Message:
> Supply missing prototypes for new Py_{Get,Set}RecursionLimit; fixes compiler wngs;
> un-analize Get's definition ("void" is needed only in declarations, not defns, &
> is generally considered bad style in the latter).

Funny. I asked this while ANSIfying, and opinions were, well, scattered :)
There are a lot more where that one came from. (See the Modules/ subdir
<wink>)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From thomas@xs4all.net  Fri Sep  1 08:54:09 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Fri, 1 Sep 2000 09:54:09 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Objects abstract.c,2.50,2.51
In-Reply-To: <200009010239.TAA27288@slayer.i.sourceforge.net>; from gvanrossum@users.sourceforge.net on Thu, Aug 31, 2000 at 07:39:03PM -0700
References: <200009010239.TAA27288@slayer.i.sourceforge.net>
Message-ID: <20000901095408.M12695@xs4all.nl>

On Thu, Aug 31, 2000 at 07:39:03PM -0700, Guido van Rossum wrote:

> Add parens suggested by gcc -Wall.

No! This groups the checks wrong. HASINPLACE(v) *has* to be true for any of
the other tests to happen. I apologize for botching the earlier two versions
and failing to check them; I've been a bit swamped with work the past week :P
I've checked them in the way they should be. (And checked, with gcc -Wall,
this time. The error is really gone.)

> ! 	else if (HASINPLACE(v)
>   		  && ((v->ob_type->tp_as_sequence != NULL &&
> ! 		      (f = v->ob_type->tp_as_sequence->sq_inplace_concat) != NULL))
>   		 || (v->ob_type->tp_as_number != NULL &&
>   		     (f = v->ob_type->tp_as_number->nb_inplace_add) != NULL))
> --- 814,821 ----
>   			return x;
>   	}
> ! 	else if ((HASINPLACE(v)
>   		  && ((v->ob_type->tp_as_sequence != NULL &&
> ! 		       (f = v->ob_type->tp_as_sequence->sq_inplace_concat)
> ! 		       != NULL)))
>   		 || (v->ob_type->tp_as_number != NULL &&
>   		     (f = v->ob_type->tp_as_number->nb_inplace_add) != NULL))
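
The trap generalizes beyond this one spot; here is a toy Python model of
the same precedence mistake ('and' binds tighter than 'or', just as &&
binds tighter than || in C):

```python
# A toy model of the grouping bug: with the parens gcc suggested, the
# guard only protects the first alternative; the intended meaning is
# that the guard covers both tests.

def guarded_wrong(guard, a, b):
    return (guard and a) or b        # b is tested even when guard fails

def guarded_right(guard, a, b):
    return guard and (a or b)        # nothing happens unless guard holds

assert guarded_wrong(False, True, True) is True   # b slipped past the guard
assert guarded_right(False, True, True) is False
```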

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From mal@lemburg.com  Fri Sep  1 09:43:56 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 01 Sep 2000 10:43:56 +0200
Subject: [Python-Dev] "declare" reserved word (was: pragma)
References: <200009010237.OAA18429@s454.cosc.canterbury.ac.nz>
Message-ID: <39AF6C4C.62451C87@lemburg.com>

Greg Ewing wrote:
> 
> "M.-A. Lemburg" <mal@lemburg.com>:
> 
> > If it's just the word itself that's bugging you, then
> > we can have a separate discussion on that. Perhaps "assume"
> > or "declare" would be a better candidates.
> 
> Yes, "declare" would be better. ALthough I'm still somewhat
> uncomfortable with the idea of naming a language feature
> before having a concrete example of what it's going to be
>  used for.

I gave some examples in the other pragma thread. The main
idea behind "declare" is to define flags at compilation
time, the encoding of string literals being one of the
original motivations for introducing these flags:

declare encoding = "latin-1"
x = u"This text will be interpreted as Latin-1 and stored as Unicode"

declare encoding = "ascii"
y = u"This is supposed to be ASCII, but contains äöü Umlauts - error !"

A similar approach could be done for 8-bit string literals
provided that the default encoding allows storing the
decoded values.

Say the default encoding is "utf-8", then you could write:

declare encoding = "latin-1"
x = "These are the German Umlauts: äöü"
# x would then be assigned the corresponding UTF-8 value of that string

Another motivation for using these flags is providing the
compiler with information about possible assumptions it
can make:

declare globals = "constant"

The compiler can then add code which caches all global
lookups in locals for subsequent use.
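
That caching can already be done by hand today; this sketch shows the
manual equivalent (it is not anything a pragma currently emits):

```python
# Manual version of the optimization described above: bind globals and
# builtins to local names, since local lookups are cheaper than global
# ones.  A hand-written sketch, not compiler output.

def squares_plain(n):
    out = []
    for i in range(n):          # 'range' and 'out.append' are looked
        out.append(i * i)       # up again on every use
    return out

def squares_cached(n, _range=range):
    out = []
    append = out.append         # cache the bound method in a local
    for i in _range(n):         # '_range' is a local (default arg)
        append(i * i)
    return out

assert squares_plain(10) == squares_cached(10)
```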

The reason I'm advertising a new keyword is that we need
a way to tell the compiler about these things from within
the source file. This is currently not possible, but is needed
to allow different modules (from possibly different authors)
to work together without the need to adapt their source
files.

Which flags will actually become available is left to 
a different discussion.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From mal@lemburg.com  Fri Sep  1 09:55:09 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 01 Sep 2000 10:55:09 +0200
Subject: [Python-Dev] lookdict
References: <200008312232.AAA14305@python.inrialpes.fr>
Message-ID: <39AF6EED.7A591932@lemburg.com>

Vladimir Marangozov wrote:
> 
> I'd like to request some clarifications on the recently checked
> dict patch. How is it supposed to work, and why is this solution okay?
> 
> What's the exact purpose of the 2nd string specialization patch?
> 
> Besides that, I must say that now the interpreter is noticeably slower
> and MAL and I were warning you kindly about this code, which was
> fine tuned over the years. It is very sensitive code and was optimized to death.
> The patch that did make it was labeled "not ready" and I would have
> appreciated another round of review. Not that I disagree, but now I feel
> obliged to submit another patch to make some obvious perf improvements
> (at least), which simply duplicates work... Fred would have done them
> very well, but I haven't had the time to say much about the implementation
> because the laconic discussion on the Patch Manager went about
> functionality.
> 
> Now I'd like to bring this on python-dev and see what exactly happened
> to lookdict and what the BeOpen team agreed on regarding this function.

Just for the record:

Python 1.5.2: 3050 pystones
Python 2.0b1: 2850 pystones before the lookup patch
              2900 pystones after the lookup patch
My old considerably patched Python 1.5:
              4000 pystones

I like Fred's idea about the customized and auto-configuring
lookup mechanism. This should definitely go into 2.1... perhaps
even with a hook that allows C extensions to drop in their own
implementations for certain types of dictionaries, e.g. ones
using perfect hash tables.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From ping@lfw.org  Fri Sep  1 10:11:15 2000
From: ping@lfw.org (Ka-Ping Yee)
Date: Fri, 1 Sep 2000 05:11:15 -0400 (EDT)
Subject: [Python-Dev] linuxaudiodev test does nothing
In-Reply-To: <14766.58306.977241.439169@buffalo.fnal.gov>
Message-ID: <Pine.LNX.4.10.10009010506380.1061-100000@skuld.lfw.org>

On Thu, 31 Aug 2000, Charles G Waldman wrote:
>  >     3. Play some raw audio data, given a string of bytes and a
>  >        sampling rate.
> 
> This would never be possible unless you also specified the format and
> encoding of the raw data - are they 8-bit, 16-bit, signed, unsigned,
> big-endian, little-endian, linear, logarithmic ("mu_law"), etc?

You're right, you do have to specify such things.  But when you
do, i'm quite confident that this should be possible, at least
for a variety of common cases.  Certainly raw audio data should
be playable in at least *some* fashion, and we also have a bunch
of very nice functions in the audioop module that can do automatic
conversions if we want to get fancy.
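
For instance, a fully-specified raw-data interface would need a
signature along these lines (describe_raw is a hypothetical name; it
only validates and summarizes the buffer rather than playing it):

```python
# Sketch of the fully-specified interface raw data needs: the byte
# string alone is meaningless without rate, sample width, channel
# count, signedness, and byte order.

def describe_raw(data, rate, width=2, channels=1,
                 signed=True, big_endian=False):
    frame_size = width * channels
    if len(data) % frame_size:
        raise ValueError("buffer is not a whole number of frames")
    frames = len(data) // frame_size
    return {"frames": frames, "seconds": frames / float(rate)}

# one second of 16-bit mono silence at 8000 Hz
assert describe_raw(b"\x00" * 16000, rate=8000) == \
       {"frames": 8000, "seconds": 1.0}
```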

> Trying to do anything with sound in a
> platform-independent manner is near-impossible.  Even the same
> "platform" (e.g. RedHat 6.2 on Intel) will behave differently
> depending on what soundcard is installed.

Are you talking about OSS vs. ALSA?  Didn't they at least try to
keep some of the basic parts of the interface the same?


-- ?!ng



From Moshe Zadka <moshez@math.huji.ac.il>  Fri Sep  1 10:42:58 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Fri, 1 Sep 2000 12:42:58 +0300 (IDT)
Subject: [Python-Dev] linuxaudiodev test does nothing
In-Reply-To: <14766.42287.968420.289804@bitdiddle.concentric.net>
Message-ID: <Pine.GSO.4.10.10009011242120.22219-100000@sundial>

On Thu, 31 Aug 2000, Jeremy Hylton wrote:

> Is the test for linuxaudiodev supposed to play the Spanish Inquisition
> .au file?  I just realized that the test does absolutely nothing on my
> machine.  (I guess I need to get my ears to raise an exception if they
> don't hear anything.)
> 
> I can play the .au file and I use a variety of other audio tools
> regularly.  Is Peter still maintaining it or can someone else offer
> some assistance?

It's probably not the case, but check that it isn't being skipped. I've added code to
liberally skip it in case the user has no permission or no soundcard.
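
The checks are along these lines (TestSkipped stands in here for the
test framework's skip exception; /dev/dsp is the OSS device the test
opens):

```python
import os

# Sketch of "skip liberally": raise a skip exception instead of failing
# when the machine has no sound card or the user lacks permission.

class TestSkipped(Exception):
    pass

def require_audio_device(dev="/dev/dsp"):
    if not os.path.exists(dev):
        raise TestSkipped("no sound card device at %s" % dev)
    if not os.access(dev, os.W_OK):
        raise TestSkipped("no permission to write to %s" % dev)
```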

--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From tim_one@email.msn.com  Fri Sep  1 12:34:46 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Fri, 1 Sep 2000 07:34:46 -0400
Subject: [Python-Dev] Prerelease Python fun on Windows!
Message-ID: <LNBBLJKPBEHFEDALKOLCIEGJHDAA.tim_one@email.msn.com>

A prerelease of the Python2.0b1 Windows installer is now available via
anonymous FTP, from

    python.beopen.com

file

    /pub/windows/beopen-python2b1p1-20000901.exe
    5,766,988 bytes

Be sure to set FTP Binary mode before you get it.

This is not *the* release.  Indeed, the docs are still from some old
pre-beta version of Python 1.6 (sorry, Fred, but I'm really sleepy!).  What
I'm trying to test here is the installer, and the basic integrity of the
installation.  A lot has changed, and we hope all for the better.

Points of particular interest:

+ I'm running a Win98SE laptop.  The install works great for me.  How
  about NT?  2000?  95?  ME?  Win64 <shudder>?

+ For the first time ever, the Windows installer should *not* require
  administrator privileges under NT or 2000.  This is untested.  If you
  log in as an administrator, it should write Python's registry info
  under HKEY_LOCAL_MACHINE.  If not an administrator, it should pop up
  an informative message and write the registry info under
  HKEY_CURRENT_USER instead.  Does this work?  This prerelease includes
  a patch from Mark Hammond that makes Python look in HKCU before HKLM
  (note that that also allows users to override the HKLM settings, if
  desired).

+ Try
    python lib/test/regrtest.py

  test_socket is expected to fail if you're not on a network, or logged
  into your ISP, at the time you run the test suite.  Otherwise
  test_socket is expected to pass.  All other tests are expected to
  pass (although, as always, a number of Unix-specific tests should get
  skipped).

+ Get into a DOS-box Python, and try

      import Tkinter
      Tkinter._test()

  This installation of Python should not interfere with, or be damaged
  by, any other installation of Tcl/Tk you happen to have lying around.
  This is also the first time we're using Tcl/Tk 8.3.2, and that needs
  wider testing too.

+ If the Tkinter test worked, try IDLE!
  Start -> Programs -> Python20 -> IDLE.

+ There is no time limit on this installation.  But if you use it for
  more than 30 days, you're going to have to ask us to pay you <wink>.

windows!-it's-not-just-for-breakfast-anymore-ly y'rs  - tim




From nascheme@enme.ucalgary.ca  Fri Sep  1 14:34:46 2000
From: nascheme@enme.ucalgary.ca (Neil Schemenauer)
Date: Fri, 1 Sep 2000 07:34:46 -0600
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules gcmodule.c,2.9,2.10
In-Reply-To: <200009010401.VAA20868@slayer.i.sourceforge.net>; from Jeremy Hylton on Thu, Aug 31, 2000 at 09:01:59PM -0700
References: <200009010401.VAA20868@slayer.i.sourceforge.net>
Message-ID: <20000901073446.A4782@keymaster.enme.ucalgary.ca>

On Thu, Aug 31, 2000 at 09:01:59PM -0700, Jeremy Hylton wrote:
> set the default threshold much higher
> we don't need to run gc frequently

Are you sure setting it that high (5000 as opposed to 100) is a good
idea?  Did you do any benchmarking?  If with-gc is going to be on by
default in 2.0 then I would agree with setting it high.  If the GC is
optional then I think it should be left as it is.  People explicitly
enabling the GC obviously have a problem with cyclic garbage.

So, is with-gc going to be default?  At this time I would vote no.

  Neil


From jeremy@beopen.com  Fri Sep  1 15:24:46 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Fri, 1 Sep 2000 10:24:46 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules gcmodule.c,2.9,2.10
In-Reply-To: <20000901073446.A4782@keymaster.enme.ucalgary.ca>
References: <200009010401.VAA20868@slayer.i.sourceforge.net>
 <20000901073446.A4782@keymaster.enme.ucalgary.ca>
Message-ID: <14767.48174.81843.299662@bitdiddle.concentric.net>

>>>>> "NS" == Neil Schemenauer <nascheme@enme.ucalgary.ca> writes:

  NS> On Thu, Aug 31, 2000 at 09:01:59PM -0700, Jeremy Hylton wrote:
  >> set the default threshold much higher we don't need to run gc
  >> frequently

  NS> Are you sure setting it that high (5000 as opposed to 100) is a
  NS> good idea?  Did you do any benchmarking?  If with-gc is going to
  NS> be on by default in 2.0 then I would agree with setting it high.
  NS> If the GC is optional then I think it should be left as it is.
  NS> People explicitly enabling the GC obviously have a problem with
  NS> cyclic garbage.

  NS> So, is with-gc going to be default?  At this time I would vote
  NS> no.

For 2.0b1, it will be on by default, which is why I set the threshold
so high.  If we get a lot of problem reports, we can change either
decision for 2.0 final.

Do you disagree?  If so, why?

Even people who do have problems with cyclic garbage don't necessarily
need a collection every 100 allocations.  (Is my understanding of what
the threshold measures correct?)  This threshold causes GC to occur so
frequently that it can happen during the *compilation* of a small
Python script.
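
With the modern gc interface (names that postdate this thread) the
effect is easy to observe: at a threshold of 100, a couple thousand
surviving container allocations trigger a stream of automatic
collections.

```python
import gc

gc.enable()
events = []
# gc.callbacks receives (phase, info) for every collection, automatic or not.
gc.callbacks.append(lambda phase, info: events.append((phase, info["generation"])))

old = gc.get_threshold()
gc.set_threshold(100)             # roughly the old 2.0 default
objs = [[] for _ in range(2000)]  # 2000 net container allocations, kept alive
gc.set_threshold(*old)
gc.callbacks.pop()

starts = [e for e in events if e[0] == "start"]
print(len(starts) > 0)            # True: several collections ran during the loop
```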

Example: The code in Tools/compiler seems to have a cyclic reference
problem, because its memory consumption drops when GC is enabled.
But the difference in total memory consumption with the threshold at
100 vs. 1000 vs. 5000 is not all that noticeable, a few MB.

Jeremy


From skip@mojam.com (Skip Montanaro)  Fri Sep  1 15:13:39 2000
From: skip@mojam.com (Skip Montanaro) (Skip Montanaro)
Date: Fri, 1 Sep 2000 09:13:39 -0500 (CDT)
Subject: [Python-Dev] cPickle.c out-of-date w.r.t. version in Zope 2.2.1
Message-ID: <14767.47507.843792.223790@beluga.mojam.com>

I'm trying to get Zope 2.2.1 to build so I can use gc to track down a memory 
leak.  In working my way through some compilation errors I noticed that
Zope's cPickle.c appears to be somewhat different than Python's version.
(Haven't checked cStringIO.c yet, but I imagine there may be a couple
differences there as well.)

Should we try to sync them up before 2.0b1?  Before 2.0final?  Wait until
2.1?  If so, should I post a patch to the SourceForge Patch Manager or send
diffs to Jim (or both)?

Skip


From thomas@xs4all.net  Fri Sep  1 15:34:52 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Fri, 1 Sep 2000 16:34:52 +0200
Subject: [Python-Dev] Prerelease Python fun on Windows!
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIEGJHDAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Fri, Sep 01, 2000 at 07:34:46AM -0400
Message-ID: <20000901163452.N12695@xs4all.nl>

On Fri, Sep 01, 2000 at 07:34:46AM -0400, Tim Peters wrote:

> + I'm running a Win98SE laptop.  The install works great for me.  How
>   about NT?  2000?  95?  ME?  Win64 <shudder>?

It runs fine under Win98 (FE) on my laptop.

> + Try
>     python lib/test/regrtest.py

No strange failures.

> + Get into a DOS-box Python, and try
> 
>       import Tkinter
>       Tkinter._test()
> 
>   This installation of Python should not interfere with, or be damaged
>   by, any other installation of Tcl/Tk you happen to have lying around.
>   This is also the first time we're using Tcl/Tk 8.3.2, and that needs
>   wider testing too.

Correctly uses 8.3.2, and not the 8.1 (or so) that came with Python 1.5.2

> + If the Tkinter test worked, try IDLE!
>   Start -> Programs -> Python20 -> IDLE.

Works, too. I had a funny experience, though. I tried to quit the
interpreter, which I'd started from a DOS box, using ^Z. And it didn't exit.
And then I started IDLE, and IDLE started up, the menus worked, I could open
a new window, but I couldn't type anything. And then I had a bluescreen. But
after the reboot, everything worked fine, even doing the exact same things.

Could just be windows crashing on me, it does that often enough, even on
freshly installed machines. Something about bad karma or something ;)

> + There is no time limit on this installation.  But if you use it for
>   more than 30 days, you're going to have to ask us to pay you <wink>.

> windows!-it's-not-just-for-breakfast-anymore-ly y'rs  - tim

"Hmmm... I think I'll call you lunch."

(Well, Windows may not be green, but it's definitely not ripe yet! Not for
me, anyway :)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From guido@beopen.com  Fri Sep  1 16:43:32 2000
From: guido@beopen.com (Guido van Rossum)
Date: Fri, 01 Sep 2000 10:43:32 -0500
Subject: [Python-Dev] _PyPclose
Message-ID: <200009011543.KAA09487@cj20424-a.reston1.va.home.com>

The _PyPclose fix looks good, Tim!

The sad thing is that if they had implemented their own data structure
to keep track of the mapping between files and processes, none of this
would have been necessary.  Look:

_PyPopenProcs is a dictionary whose keys are FILE* pointers wrapped in
Python longs, and whose values are lists of length 2 containing a
process handle and a file count.  Pseudocode:

# global:
    _PyPopenProcs = None

# in _PyPopen:
    global _PyPopenProcs
    if _PyPopenProcs is None:
        _PyPopenProcs = {}
    files = <list of files created>
    list = [process_handle, len(files)]
    for file in files:
        _PyPopenProcs[id(file)] = list

# in _PyPclose(file):
    global _PyPopenProcs
    list = _PyPopenProcs[id(file)]
    nfiles = list[1]
    if nfiles > 1:
        list[1] = nfiles - 1
    else:
        <wait for the process status>
    del _PyPopenProcs[id(file)]
    if len(_PyPopenProcs) == 0:
        _PyPopenProcs = None

This expands to pages of C code!  There's a *lot* of code dealing with
creating the Python objects, error checking, etc.  I bet that it all
would become much smaller and more readable if a custom C-based data
structure was used.  A linked list associating files with processes
would be all that's needed.  We can even afford a linear search of the
list to see if we just closed the last file open for this process.

Sigh.  Maybe for another time.

(That linked list would require a lock of its own.  Fine.)
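
In Python terms, the structure described above is just this (all names
are illustrative only):

```python
class _Node:
    """One open file belonging to a popen'ed process."""
    def __init__(self, file_id, handle, nxt=None):
        self.file_id = file_id
        self.handle = handle
        self.next = nxt

_head = None

def register(file_id, handle):
    global _head
    _head = _Node(file_id, handle, _head)

def on_close(file_id):
    """Unlink the file's node; return the process handle only if this
    was the last open file for that process (so the caller can wait)."""
    global _head
    prev, node = None, _head
    while node is not None and node.file_id != file_id:
        prev, node = node, node.next
    if node is None:
        return None
    if prev is None:
        _head = node.next
    else:
        prev.next = node.next
    cur = _head                      # the linear search mentioned above
    while cur is not None:
        if cur.handle == node.handle:
            return None              # another file is still open
        cur = cur.next
    return node.handle               # last one closed: wait on the process

register(1, "proc")
register(2, "proc")
print(on_close(1))  # None
print(on_close(2))  # proc
```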

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From skip@mojam.com (Skip Montanaro)  Fri Sep  1 16:03:30 2000
From: skip@mojam.com (Skip Montanaro) (Skip Montanaro)
Date: Fri, 1 Sep 2000 10:03:30 -0500 (CDT)
Subject: [Python-Dev] DEBUG_SAVEALL feature for gc not in 2.0b1?
Message-ID: <14767.50498.896689.445018@beluga.mojam.com>

--udFi5TfI4P
Content-Type: text/plain; charset=us-ascii
Content-Description: message body text
Content-Transfer-Encoding: 7bit


Neil sent me a patch a week or two ago that implemented a DEBUG_SAVEALL flag
for the gc module.  If set, it assigns all cyclic garbage to gc.garbage
instead of deleting it, thus resurrecting the garbage so you can inspect it.
This seems not to have made it into the CVS repository.

I think this is good mojo and deserves to be in the distribution, if not for
the release, then for 2.1 at least.  I've attached the patch Neil sent me
(which includes code, doc and test updates).  It's helped me track down one
(stupid) cyclic trash bug in my own code.  Neil, unless there are strong
arguments to the contrary, I recommend you submit a patch to SF.
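
For the record, the feature is simple to drive; with a gc module that
has the DEBUG_SAVEALL flag (as in Neil's patch), a session looks
roughly like this:

```python
import gc

gc.disable()                      # keep collections manual for the demo
gc.set_debug(gc.DEBUG_SAVEALL)    # resurrect garbage instead of freeing it

l = []
l.append(l)                       # a self-referential cycle
cycle_id = id(l)
del l                             # now unreachable except via the cycle

gc.collect()
found = any(id(obj) == cycle_id for obj in gc.garbage)
print(found)                      # True: the cycle is inspectable

gc.set_debug(0)
del gc.garbage[:]
gc.enable()
```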

Skip


--udFi5TfI4P
Content-Type: application/octet-stream
Content-Description: patch to get gc to resurrect garbage instead of freeing it
Content-Disposition: attachment;
	filename="saveall.patch"
Content-Transfer-Encoding: base64

Ci0tVXVndldBZnNnaWVaUnFnawpDb250ZW50LVR5cGU6IHRleHQvcGxhaW47IGNoYXJzZXQ9
dXMtYXNjaWkKCk9uIFN1biwgQXVnIDIwLCAyMDAwIGF0IDA5OjE4OjQ3UE0gLTA1MDAsIFNr
aXAgTW9udGFuYXJvIHdyb3RlOgo+IFllcywgSSB3b3VsZCBsb3ZlIGl0IGlmIHlvdSBjb3Vs
ZCAicmVpbmNhcm5hdGUiIGdhcmJhZ2UgdW5kZXIgZ2MgbW9kdWxlCj4gY29udHJvbC4KCk9r
YXksIGEgcGF0Y2ggaXMgYXR0YWNoZWQuICBJZiB0aGlzIHdvcmtzIG9rYXkgZm9yIHlvdSB0
aGVuIEkgd2lsbAp1cGxvYWQgaXQgdG8gc291cmNlZm9yZ2UuCgogIE5laWwKCi0tVXVndldB
ZnNnaWVaUnFnawpDb250ZW50LVR5cGU6IHRleHQvcGxhaW47IGNoYXJzZXQ9dXMtYXNjaWkK
Q29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9InNhdmVfYWxsLmRp
ZmYiCgpJbmRleDogMC4xOC9Nb2R1bGVzL2djbW9kdWxlLmMKLS0tIDAuMTgvTW9kdWxlcy9n
Y21vZHVsZS5jIFNhdCwgMTIgQXVnIDIwMDAgMTE6NDc6NDQgLTA0MDAgbmFzIChweXRob24v
Ti8zOF9nY21vZHVsZS5jIDEuMyA2NDQpCisrKyAwLjE4KHcpL01vZHVsZXMvZ2Ntb2R1bGUu
YyBTdW4sIDIwIEF1ZyAyMDAwIDIzOjIwOjQxIC0wNDAwIG5hcyAocHl0aG9uL04vMzhfZ2Nt
b2R1bGUuYyAxLjMgNjQ0KQpAQCAtNTMsMTAgKzUzLDEyIEBACiAjZGVmaW5lIERFQlVHX1VO
Q09MTEVDVEFCTEUJKDE8PDIpIC8qIHByaW50IHVuY29sbGVjdGFibGUgb2JqZWN0cyAqLwog
I2RlZmluZSBERUJVR19JTlNUQU5DRVMJCSgxPDwzKSAvKiBwcmludCBpbnN0YW5jZXMgKi8K
ICNkZWZpbmUgREVCVUdfT0JKRUNUUwkJKDE8PDQpIC8qIHByaW50IG90aGVyIG9iamVjdHMg
Ki8KKyNkZWZpbmUgREVCVUdfU0FWRUFMTAkJKDE8PDUpIC8qIHNhdmUgYWxsIGdhcmJhZ2Ug
aW4gZ2MuZ2FyYmFnZSAqLwogI2RlZmluZSBERUJVR19MRUFLCQlERUJVR19DT0xMRUNUQUJM
RSB8IFwKIAkJCQlERUJVR19VTkNPTExFQ1RBQkxFIHwgXAogCQkJCURFQlVHX0lOU1RBTkNF
UyB8IFwKLQkJCQlERUJVR19PQkpFQ1RTCisJCQkJREVCVUdfT0JKRUNUUyB8IFwKKwkJCQlE
RUJVR19TQVZFQUxMCiBzdGF0aWMgaW50IGRlYnVnOwogCiAvKiBsaXN0IG9mIHVuY29sbGVj
dGFibGUgb2JqZWN0cyAqLwpAQCAtMzAwLDE5ICszMDIsMTcgQEAKIGhhbmRsZV9maW5hbGl6
ZXJzKFB5R0NfSGVhZCAqZmluYWxpemVycywgUHlHQ19IZWFkICpvbGQpCiB7CiAJUHlHQ19I
ZWFkICpnYzsKLQlpZiAoZ2FyYmFnZSA9PSBOVUxMKSB7Ci0JCWdhcmJhZ2UgPSBQeUxpc3Rf
TmV3KDApOwotCX0KIAlmb3IgKGdjID0gZmluYWxpemVycy0+Z2NfbmV4dDsgZ2MgIT0gZmlu
YWxpemVyczsKIAkJCWdjID0gZmluYWxpemVycy0+Z2NfbmV4dCkgewogCQlQeU9iamVjdCAq
b3AgPSBQeU9iamVjdF9GUk9NX0dDKGdjKTsKLQkJLyogQWRkIGFsbCBpbnN0YW5jZXMgdG8g
YSBQeXRob24gYWNjZXNzaWJsZSBsaXN0IG9mIGdhcmJhZ2UgKi8KLQkJaWYgKFB5SW5zdGFu
Y2VfQ2hlY2sob3ApKSB7CisJCWlmICgoZGVidWcgJiBERUJVR19TQVZFQUxMKSB8IFB5SW5z
dGFuY2VfQ2hlY2sob3ApKSB7CisJCQkvKiBJZiBTQVZFQUxMIGlzIG5vdCBzZXQgdGhlbiBq
dXN0IGFwcGVuZAorCQkJICogaW5zdGFuY2VzIHRvIHRoZSBsaXN0IG9mIGdhcmJhZ2UuICBX
ZSBhc3N1bWUKKwkJCSAqIHRoYXQgYWxsIG9iamVjdHMgaW4gdGhlIGZpbmFsaXplcnMgbGlz
dCBhcmUKKwkJCSAqIHJlYWNoYWJsZSBmcm9tIGluc3RhbmNlcy4gKi8KIAkJCVB5TGlzdF9B
cHBlbmQoZ2FyYmFnZSwgb3ApOwogCQl9Ci0JCS8qIFdlIGFzc3VtZSB0aGF0IGFsbCBvYmpl
Y3RzIGluIGZpbmFsaXplcnMgYXJlIHJlYWNoYWJsZSBmcm9tCi0JCSAqIGluc3RhbmNlcy4g
IE9uY2Ugd2UgYWRkIHRoZSBpbnN0YW5jZXMgdG8gdGhlIGdhcmJhZ2UgbGlzdAotCQkgKiBl
dmVyeXRoaW5nIGlzIHJlYWNoYWJsZSBmcm9tIFB5dGhvbiBhZ2Fpbi4gKi8KKwkJLyogb2Jq
ZWN0IGlzIG5vdyByZWFjaGFibGUgYWdhaW4gKi8gCiAJCWdjX2xpc3RfcmVtb3ZlKGdjKTsK
IAkJZ2NfbGlzdF9hcHBlbmQoZ2MsIG9sZCk7CiAJfQpAQCAtMzI5LDE3ICszMjksMTcgQEAK
IAl3aGlsZSAodW5yZWFjaGFibGUtPmdjX25leHQgIT0gdW5yZWFjaGFibGUpIHsKIAkJUHlH
Q19IZWFkICpnYyA9IHVucmVhY2hhYmxlLT5nY19uZXh0OwogCQlQeU9iamVjdCAqb3AgPSBQ
eU9iamVjdF9GUk9NX0dDKGdjKTsKLQkJLyoKLQkJUHlMaXN0X0FwcGVuZChnYXJiYWdlLCBv
cCk7Ci0JCSovCi0JCWlmICgoY2xlYXIgPSBvcC0+b2JfdHlwZS0+dHBfY2xlYXIpICE9IE5V
TEwpIHsKLQkJCVB5X0lOQ1JFRihvcCk7Ci0JCQljbGVhcigoUHlPYmplY3QgKilvcCk7Ci0J
CQlQeV9ERUNSRUYob3ApOworCQlpZiAoZGVidWcgJiBERUJVR19TQVZFQUxMKSB7CisJCQlQ
eUxpc3RfQXBwZW5kKGdhcmJhZ2UsIG9wKTsKKwkJfSBlbHNlIHsKKwkJCWlmICgoY2xlYXIg
PSBvcC0+b2JfdHlwZS0+dHBfY2xlYXIpICE9IE5VTEwpIHsKKwkJCQlQeV9JTkNSRUYob3Ap
OworCQkJCWNsZWFyKChQeU9iamVjdCAqKW9wKTsKKwkJCQlQeV9ERUNSRUYob3ApOworCQkJ
fQogCQl9Ci0JCS8qIG9ubHkgdHJ5IHRvIGNhbGwgdHBfY2xlYXIgb25jZSBmb3IgZWFjaCBv
YmplY3QgKi8KIAkJaWYgKHVucmVhY2hhYmxlLT5nY19uZXh0ID09IGdjKSB7Ci0JCQkvKiBz
dGlsbCBhbGl2ZSwgbW92ZSBpdCwgaXQgbWF5IGRpZSBsYXRlciAqLworCQkJLyogb2JqZWN0
IGlzIHN0aWxsIGFsaXZlLCBtb3ZlIGl0LCBpdCBtYXkgZGllIGxhdGVyICovCiAJCQlnY19s
aXN0X3JlbW92ZShnYyk7CiAJCQlnY19saXN0X2FwcGVuZChnYywgb2xkKTsKIAkJfQpAQCAt
NjA2LDYgKzYwNiw3IEBACiAiICBERUJVR19VTkNPTExFQ1RBQkxFIC0gUHJpbnQgdW5yZWFj
aGFibGUgYnV0IHVuY29sbGVjdGFibGUgb2JqZWN0cyBmb3VuZC5cbiIKICIgIERFQlVHX0lO
U1RBTkNFUyAtIFByaW50IGluc3RhbmNlIG9iamVjdHMuXG4iCiAiICBERUJVR19PQkpFQ1RT
IC0gUHJpbnQgb2JqZWN0cyBvdGhlciB0aGFuIGluc3RhbmNlcy5cbiIKKyIgIERFQlVHX1NB
VkVBTEwgLSBTYXZlIG9iamVjdHMgdG8gZ2MuZ2FyYmFnZSByYXRoZXIgdGhhbiBmcmVlaW5n
IHRoZW0uXG4iCiAiICBERUJVR19MRUFLIC0gRGVidWcgbGVha2luZyBwcm9ncmFtcyAoZXZl
cnl0aGluZyBidXQgU1RBVFMpLlxuIgogOwogCkBAIC03MDUsOSArNzA2LDcgQEAKIAkJCSAg
ICAgIE5VTEwsCiAJCQkgICAgICBQWVRIT05fQVBJX1ZFUlNJT04pOwogCWQgPSBQeU1vZHVs
ZV9HZXREaWN0KG0pOwotCWlmIChnYXJiYWdlID09IE5VTEwpIHsKLQkJZ2FyYmFnZSA9IFB5
TGlzdF9OZXcoMCk7Ci0JfQorCWdhcmJhZ2UgPSBQeUxpc3RfTmV3KDApOwogCVB5RGljdF9T
ZXRJdGVtU3RyaW5nKGQsICJnYXJiYWdlIiwgZ2FyYmFnZSk7CiAJUHlEaWN0X1NldEl0ZW1T
dHJpbmcoZCwgIkRFQlVHX1NUQVRTIiwKIAkJCVB5SW50X0Zyb21Mb25nKERFQlVHX1NUQVRT
KSk7CkBAIC03MTksNiArNzE4LDggQEAKIAkJCVB5SW50X0Zyb21Mb25nKERFQlVHX0lOU1RB
TkNFUykpOwogCVB5RGljdF9TZXRJdGVtU3RyaW5nKGQsICJERUJVR19PQkpFQ1RTIiwKIAkJ
CVB5SW50X0Zyb21Mb25nKERFQlVHX09CSkVDVFMpKTsKKwlQeURpY3RfU2V0SXRlbVN0cmlu
ZyhkLCAiREVCVUdfU0FWRUFMTCIsCisJCQlQeUludF9Gcm9tTG9uZyhERUJVR19TQVZFQUxM
KSk7CiAJUHlEaWN0X1NldEl0ZW1TdHJpbmcoZCwgIkRFQlVHX0xFQUsiLAogCQkJUHlJbnRf
RnJvbUxvbmcoREVCVUdfTEVBSykpOwogfQpJbmRleDogMC4xOC9MaWIvdGVzdC90ZXN0X2dj
LnB5Ci0tLSAwLjE4L0xpYi90ZXN0L3Rlc3RfZ2MucHkgU2F0LCAxMiBBdWcgMjAwMCAxMTo0
Nzo0NCAtMDQwMCBuYXMgKHB5dGhvbi9PLzFfdGVzdF9nYy5weSAxLjIgNjQ0KQorKysgMC4x
OCh3KS9MaWIvdGVzdC90ZXN0X2djLnB5IFN1biwgMjAgQXVnIDIwMDAgMjM6MTc6MDUgLTA0
MDAgbmFzIChweXRob24vTy8xX3Rlc3RfZ2MucHkgMS4yIDY0NCkKQEAgLTEsMTggKzEsMzUg
QEAKK2Zyb20gdGVzdF9zdXBwb3J0IGltcG9ydCB2ZXJib3NlLCBUZXN0RmFpbGVkCiBpbXBv
cnQgZ2MKIAorZGVmIHJ1bl90ZXN0KG5hbWUsIHRodW5rKToKKyAgICBpZiB2ZXJib3NlOgor
ICAgICAgICBwcmludCAidGVzdGluZyAlcy4uLiIgJSBuYW1lLAorICAgIHRyeToKKyAgICAg
ICAgdGh1bmsoKQorICAgIGV4Y2VwdCBUZXN0RmFpbGVkOgorICAgICAgICBpZiB2ZXJib3Nl
OgorICAgICAgICAgICAgcHJpbnQgImZhaWxlZCAoZXhwZWN0ZWQgJXMgYnV0IGdvdCAlcyki
ICUgKHJlc3VsdCwKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICB0ZXN0X3Jlc3VsdCkKKyAgICAgICAgcmFpc2UgVGVzdEZhaWxlZCwg
bmFtZQorICAgIGVsc2U6CisgICAgICAgIGlmIHZlcmJvc2U6CisgICAgICAgICAgICBwcmlu
dCAib2siCisKIGRlZiB0ZXN0X2xpc3QoKToKICAgICBsID0gW10KICAgICBsLmFwcGVuZChs
KQogICAgIGdjLmNvbGxlY3QoKQogICAgIGRlbCBsCi0gICAgYXNzZXJ0IGdjLmNvbGxlY3Qo
KSA9PSAxCisgICAgaWYgZ2MuY29sbGVjdCgpICE9IDE6CisgICAgICAgIHJhaXNlIFRlc3RG
YWlsZWQKIAogZGVmIHRlc3RfZGljdCgpOgogICAgIGQgPSB7fQogICAgIGRbMV0gPSBkCiAg
ICAgZ2MuY29sbGVjdCgpCiAgICAgZGVsIGQKLSAgICBhc3NlcnQgZ2MuY29sbGVjdCgpID09
IDEKKyAgICBpZiBnYy5jb2xsZWN0KCkgIT0gMToKKyAgICAgICAgcmFpc2UgVGVzdEZhaWxl
ZAogCiBkZWYgdGVzdF90dXBsZSgpOgogICAgIGwgPSBbXQpAQCAtMjEsNyArMzgsOCBAQAog
ICAgIGdjLmNvbGxlY3QoKQogICAgIGRlbCB0CiAgICAgZGVsIGwKLSAgICBhc3NlcnQgZ2Mu
Y29sbGVjdCgpID09IDIKKyAgICBpZiBnYy5jb2xsZWN0KCkgIT0gMjoKKyAgICAgICAgcmFp
c2UgVGVzdEZhaWxlZAogCiBkZWYgdGVzdF9jbGFzcygpOgogICAgIGNsYXNzIEE6CkBAIC0y
OSw3ICs0Nyw4IEBACiAgICAgQS5hID0gQQogICAgIGdjLmNvbGxlY3QoKQogICAgIGRlbCBB
Ci0gICAgYXNzZXJ0IGdjLmNvbGxlY3QoKSA+IDAKKyAgICBpZiBnYy5jb2xsZWN0KCkgPT0g
MDoKKyAgICAgICAgcmFpc2UgVGVzdEZhaWxlZAogCiBkZWYgdGVzdF9pbnN0YW5jZSgpOgog
ICAgIGNsYXNzIEE6CkBAIC0zOCw3ICs1Nyw4IEBACiAgICAgYS5hID0gYQogICAgIGdjLmNv
bGxlY3QoKQogICAgIGRlbCBhCi0gICAgYXNzZXJ0IGdjLmNvbGxlY3QoKSA+IDAKKyAgICBp
ZiBnYy5jb2xsZWN0KCkgPT0gMDoKKyAgICAgICAgcmFpc2UgVGVzdEZhaWxlZAogCiBkZWYg
dGVzdF9tZXRob2QoKToKICAgICBjbGFzcyBBOgpAQCAtNDcsNyArNjcsOCBAQAogICAgIGEg
PSBBKCkKICAgICBnYy5jb2xsZWN0KCkKICAgICBkZWwgYQotICAgIGFzc2VydCBnYy5jb2xs
ZWN0KCkgPiAwCisgICAgaWYgZ2MuY29sbGVjdCgpID09IDA6CisgICAgICAgIHJhaXNlIFRl
c3RGYWlsZWQKIAogZGVmIHRlc3RfZmluYWxpemVyKCk6CiAgICAgY2xhc3MgQToKQEAgLTYw
LDQwICs4MSw3MyBAQAogICAgIGIgPSBCKCkKICAgICBiLmIgPSBiCiAgICAgZ2MuY29sbGVj
dCgpCi0gICAgZ2MuZ2FyYmFnZVs6XSA9IFtdCiAgICAgZGVsIGEKICAgICBkZWwgYgotICAg
IGFzc2VydCBnYy5jb2xsZWN0KCkgPiAwCi0gICAgYXNzZXJ0IGlkKGdjLmdhcmJhZ2VbMF0p
ID09IGlkX2EKKyAgICBpZiBnYy5jb2xsZWN0KCkgPT0gMDoKKyAgICAgICAgcmFpc2UgVGVz
dEZhaWxlZAorICAgIGZvciBvYmogaW4gZ2MuZ2FyYmFnZToKKyAgICAgICAgaWYgaWQob2Jq
KSA9PSBpZF9hOgorICAgICAgICAgICAgZGVsIG9iai5hCisgICAgICAgICAgICBicmVhawor
ICAgIGVsc2U6CisgICAgICAgIHJhaXNlIFRlc3RGYWlsZWQKIAogZGVmIHRlc3RfZnVuY3Rp
b24oKToKICAgICBkID0ge30KICAgICBleGVjKCJkZWYgZigpOiBwYXNzXG4iKSBpbiBkCiAg
ICAgZ2MuY29sbGVjdCgpCiAgICAgZGVsIGQKLSAgICBhc3NlcnQgZ2MuY29sbGVjdCgpID09
IDIKKyAgICBpZiBnYy5jb2xsZWN0KCkgIT0gMjoKKyAgICAgICAgcmFpc2UgVGVzdEZhaWxl
ZAogCitkZWYgdGVzdF9zYXZlYWxsKCk6CisgICAgZGVidWcgPSBnYy5nZXRfZGVidWcoKQor
ICAgIGdjLnNldF9kZWJ1ZygwKQorICAgIGdjLnNldF9kZWJ1ZyhnYy5ERUJVR19TQVZFQUxM
KQorICAgIGwgPSBbXQorICAgIGwuYXBwZW5kKGwpCisgICAgaWRfbCA9IGlkKGwpCisgICAg
ZGVsIGwKKyAgICBnYy5jb2xsZWN0KCkKKyAgICB0cnk6CisgICAgICAgIGZvciBvYmogaW4g
Z2MuZ2FyYmFnZToKKyAgICAgICAgICAgIGlmIGlkKG9iaikgPT0gaWRfbDoKKyAgICAgICAg
ICAgICAgICBvYmpbOl0gPSBbXQorICAgICAgICAgICAgICAgIGJyZWFrCisgICAgICAgIGVs
c2U6CisgICAgICAgICAgICByYWlzZSBUZXN0RmFpbGVkCisgICAgZmluYWxseToKKyAgICAg
ICAgZ2Muc2V0X2RlYnVnKGRlYnVnKQogCi1kZWYgdGVzdF9hbGwoKToKKyAgICAKIAorZGVm
IHRlc3RfYWxsKCk6CisgICAgcnVuX3Rlc3QoImxpc3RzIiwgdGVzdF9saXN0KQorICAgIHJ1
bl90ZXN0KCJkaWN0cyIsIHRlc3RfZGljdCkKKyAgICBydW5fdGVzdCgidHVwbGVzIiwgdGVz
dF90dXBsZSkKKyAgICBydW5fdGVzdCgiY2xhc3NlcyIsIHRlc3RfY2xhc3MpCisgICAgcnVu
X3Rlc3QoImluc3RhbmNlcyIsIHRlc3RfaW5zdGFuY2UpCisgICAgcnVuX3Rlc3QoIm1ldGhv
ZHMiLCB0ZXN0X21ldGhvZCkKKyAgICBydW5fdGVzdCgiZnVuY3Rpb25zIiwgdGVzdF9mdW5j
dGlvbikKKyAgICBydW5fdGVzdCgiZmluYWxpemVycyIsIHRlc3RfZmluYWxpemVyKQorICAg
IHJ1bl90ZXN0KCJzYXZlYWxsIiwgdGVzdF9zYXZlYWxsKQorCitkZWYgdGVzdCgpOgorICAg
IGlmIHZlcmJvc2U6CisgICAgICAgIHByaW50ICJkaXNhYmxpbmcgYXV0b21hdGljIGNvbGxl
Y3Rpb24iCiAgICAgZW5hYmxlZCA9IGdjLmlzZW5hYmxlZCgpCiAgICAgZ2MuZGlzYWJsZSgp
Ci0gICAgYXNzZXJ0IG5vdCBnYy5pc2VuYWJsZWQoKQorICAgIGFzc2VydCBub3QgZ2MuaXNl
bmFibGVkKCkgCiAKLSAgICB0ZXN0X2xpc3QoKQotICAgIHRlc3RfZGljdCgpCi0gICAgdGVz
dF90dXBsZSgpCi0gICAgdGVzdF9jbGFzcygpCi0gICAgdGVzdF9pbnN0YW5jZSgpCi0gICAg
dGVzdF9tZXRob2QoKQotICAgIHRlc3RfZmluYWxpemVyKCkKLSAgICB0ZXN0X2Z1bmN0aW9u
KCkKKyAgICB0ZXN0X2FsbCgpCiAKICAgICAjIHRlc3QgZ2MuZW5hYmxlKCkgZXZlbiBpZiBH
QyBpcyBkaXNhYmxlZCBieSBkZWZhdWx0CisgICAgaWYgdmVyYm9zZToKKyAgICAgICAgcHJp
bnQgInJlc3RvcmluZyBhdXRvbWF0aWMgY29sbGVjdGlvbiIKICAgICBnYy5lbmFibGUoKQog
ICAgIGFzc2VydCBnYy5pc2VuYWJsZWQoKQogICAgIGlmIG5vdCBlbmFibGVkOgogICAgICAg
ICBnYy5kaXNhYmxlKCkKIAogCi10ZXN0X2FsbCgpCit0ZXN0KCkKSW5kZXg6IDAuMTgvRG9j
L2xpYi9saWJnYy50ZXgKLS0tIDAuMTgvRG9jL2xpYi9saWJnYy50ZXggU2F0LCAxMiBBdWcg
MjAwMCAxMTo0Nzo0NCAtMDQwMCBuYXMgKHB5dGhvbi9PLzQ1X2xpYmdjLnRleCAxLjIgNjQ0
KQorKysgMC4xOCh3KS9Eb2MvbGliL2xpYmdjLnRleCBTdW4sIDIwIEF1ZyAyMDAwIDIzOjI2
OjEyIC0wNDAwIG5hcyAocHl0aG9uL08vNDVfbGliZ2MudGV4IDEuMiA2NDQpCkBAIC0yLDgg
KzIsOCBAQAogICAgICAgICAgR2FyYmFnZSBDb2xsZWN0b3IgaW50ZXJmYWNlfQogCiBcZGVj
bGFyZW1vZHVsZXtleHRlbnNpb259e2djfQotXG1vZHVsZWF1dGhvcntOZWlsIFNjaGVtZW5h
dWVyfXtuYXNjaGVtZUBlbm1lLnVjYWxnYXJ5LmNhfQotXHNlY3Rpb25hdXRob3J7TmVpbCBT
Y2hlbWVuYXVlcn17bmFzY2hlbWVAZW5tZS51Y2FsZ2FyeS5jYX0KK1xtb2R1bGVhdXRob3J7
TmVpbCBTY2hlbWVuYXVlcn17bmFzQGFyY3RyaXguY29tfQorXHNlY3Rpb25hdXRob3J7TmVp
bCBTY2hlbWVuYXVlcn17bmFzQGFyY3RyaXguY29tfQogCiBUaGlzIG1vZHVsZSBwcm92aWRl
cyBhbiBpbnRlcmZhY2UgdG8gdGhlIG9wdGlvbmFsIGdhcmJhZ2UgY29sbGVjdG9yLiAgSXQK
IHByb3ZpZGVzIHRoZSBhYmlsaXR5IHRvIGRpc2FibGUgdGhlIGNvbGxlY3RvciwgdHVuZSB0
aGUgY29sbGVjdGlvbgpAQCAtNzksNyArNzksOSBAQAogQSBsaXN0IG9mIG9iamVjdHMgd2hp
Y2ggdGhlIGNvbGxlY3RvciBmb3VuZCB0byBiZSB1bnJlYWNoYWJsZQogYnV0IGNvdWxkIG5v
dCBiZSBmcmVlZCAodW5jb2xsZWN0YWJsZSBvYmplY3RzKS4gIE9iamVjdHMgdGhhdCBoYXZl
CiBcbWV0aG9ke19fZGVsX18oKX0gbWV0aG9kcyBhbmQgY3JlYXRlIHBhcnQgb2YgYSByZWZl
cmVuY2UgY3ljbGUgY2F1c2UKLXRoZSBlbnRpcmUgcmVmZXJlbmNlIGN5Y2xlIHRvIGJlIHVu
Y29sbGVjdGFibGUuICAKK3RoZSBlbnRpcmUgcmVmZXJlbmNlIGN5Y2xlIHRvIGJlIHVuY29s
bGVjdGFibGUuICBJZgorXGNvbnN0YW50e0RFQlVHX1NBVkVBTEx9IGlzIHNldCwgdGhlbiBh
bGwgdW5yZWFjaGFibGUgb2JqZWN0cyB3aWxsCitiZSBhZGRlZCB0byB0aGlzIGxpc3QgcmF0
aGVyIHRoYW4gZnJlZWQuCiBcZW5ke2RhdGFkZXNjfQogCiAKQEAgLTExMSw4ICsxMTMsMTQg
QEAKIHNldCwgcHJpbnQgaW5mb3JtYXRpb24gYWJvdXQgb2JqZWN0cyBvdGhlciB0aGFuIGlu
c3RhbmNlIG9iamVjdHMgZm91bmQuCiBcZW5ke2RhdGFkZXNjfQogCitcYmVnaW57ZGF0YWRl
c2N9e0RFQlVHX1NBVkVBTEx9CitXaGVuIHNldCwgYWxsIHVucmVhY2hhYmxlIG9iamVjdHMg
Zm91bmQgd2lsbCBiZSBhcHBlbmRlZCB0bworXHZhcntnYXJiYWdlfSByYXRoZXIgdGhhbiBi
ZWluZyBmcmVlZC4gIFRoaXMgY2FuIGJlIHVzZWZ1bCBmb3IgZGVidWdnaW5nCithIGxlYWtp
bmcgcHJvZ3JhbS4KK1xlbmR7ZGF0YWRlc2N9CisKIFxiZWdpbntkYXRhZGVzY317REVCVUdf
TEVBS30KIFRoZSBkZWJ1Z2dpbmcgZmxhZ3MgbmVjZXNzYXJ5IGZvciB0aGUgY29sbGVjdG9y
IHRvIHByaW50CiBpbmZvcm1hdGlvbiBhYm91dCBhIGxlYWtpbmcgcHJvZ3JhbSAoZXF1YWwg
dG8gXGNvZGV7REVCVUdfQ09MTEVDVEFCTEUgfAotREVCVUdfVU5DT0xMRUNUQUJMRSB8IERF
QlVHX0lOU1RBTkNFUyB8IERFQlVHX09CSkVDVFN9KS4gIAorREVCVUdfVU5DT0xMRUNUQUJM
RSB8IERFQlVHX0lOU1RBTkNFUyB8IERFQlVHX09CSkVDVFMgfCBERUJVR19TQVZFQUxMfSku
ICAKIFxlbmR7ZGF0YWRlc2N9CgotLVV1Z3ZXQWZzZ2llWlJxZ2stLQo=

--udFi5TfI4P--


From guido@beopen.com  Fri Sep  1 17:31:26 2000
From: guido@beopen.com (Guido van Rossum)
Date: Fri, 01 Sep 2000 11:31:26 -0500
Subject: [Python-Dev] lookdict
In-Reply-To: Your message of "Fri, 01 Sep 2000 10:55:09 +0200."
 <39AF6EED.7A591932@lemburg.com>
References: <200008312232.AAA14305@python.inrialpes.fr>
 <39AF6EED.7A591932@lemburg.com>
Message-ID: <200009011631.LAA09876@cj20424-a.reston1.va.home.com>

Thanks, Marc-Andre, for pointing out that Fred's lookdict code is
actually an improvement.

The reason for all this is that we found that lookdict() calls
PyObject_Compare() without checking for errors.  If there's a key that
raises an error when compared to another key, the keys compare unequal
and an exception is set, which may disturb an exception that the
caller of PyDict_GetItem() might be calling.  PyDict_GetItem() is
documented as never raising an exception.  This is actually not strong
enough; it was intended never to clear an exception either.
The potential errors from PyObject_Compare() violate this contract.
Note that these errors are nothing new; PyObject_Compare() has been
able to raise exceptions for a long time, e.g. from errors raised by
__cmp__().

The first-order fix is to call PyErr_Fetch() and PyErr_Restore()
around the calls to PyObject_Compare().  This is slow (for reasons
Vladimir points out) even though Fred was very careful to only call
PyErr_Fetch() or PyErr_Restore() when absolutely necessary and only
once per lookdict call.  The second-order fix therefore is Fred's
specialization for string-keys-only dicts.

There's another problem: as fixed, lookdict needs a current thread
state!  (Because the exception state is stored per thread.)  There are
cases where PyDict_GetItem() is called when there's no thread state!
The first one we found was Tim Peters' patch for _PyPclose (see
separate message).  There may be others -- we'll have to fix these
when we find them (probably after 2.0b1 is released but hopefully
before 2.0 final).

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From akuchlin@mems-exchange.org  Fri Sep  1 16:42:01 2000
From: akuchlin@mems-exchange.org (Andrew Kuchling)
Date: Fri, 1 Sep 2000 11:42:01 -0400
Subject: [Python-Dev] cPickle.c out-of-date w.r.t. version in Zope 2.2.1
In-Reply-To: <14767.47507.843792.223790@beluga.mojam.com>; from skip@mojam.com on Fri, Sep 01, 2000 at 09:13:39AM -0500
References: <14767.47507.843792.223790@beluga.mojam.com>
Message-ID: <20000901114201.B5855@kronos.cnri.reston.va.us>

On Fri, Sep 01, 2000 at 09:13:39AM -0500, Skip Montanaro wrote:
>leak.  In working my way through some compilation errors I noticed that
>Zope's cPickle.c appears to be somewhat different than Python's version.
>(Haven't checked cStringIO.c yet, but I imagine there may be a couple
>differences there as well.)

There are also diffs in cStringIO.c, though not ones that affect
functionality: ANSI-fication, and a few changes to the Python API
(PyObject_Length -> PyObject_Size, PyObject_NEW -> PyObject_New, &c).

The cPickle.c changes look to be:
    * ANSIfication.
    * API changes.
    * Support for Unicode strings.

The API changes are the most annoying ones, since you need to add
#ifdefs in order for the module to compile with both 1.5.2 and 2.0.
(Might be worth seeing if this can be alleviated with a few strategic
macros, though I think not...)

--amk



From nascheme@enme.ucalgary.ca  Fri Sep  1 16:48:21 2000
From: nascheme@enme.ucalgary.ca (Neil Schemenauer)
Date: Fri, 1 Sep 2000 09:48:21 -0600
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules gcmodule.c,2.9,2.10
In-Reply-To: <14767.48174.81843.299662@bitdiddle.concentric.net>; from Jeremy Hylton on Fri, Sep 01, 2000 at 10:24:46AM -0400
References: <200009010401.VAA20868@slayer.i.sourceforge.net> <20000901073446.A4782@keymaster.enme.ucalgary.ca> <14767.48174.81843.299662@bitdiddle.concentric.net>
Message-ID: <20000901094821.A5571@keymaster.enme.ucalgary.ca>

On Fri, Sep 01, 2000 at 10:24:46AM -0400, Jeremy Hylton wrote:
> Even people who do have problems with cyclic garbage don't necessarily
> need a collection every 100 allocations.  (Is my understanding of what
> the threshold measures correct?)

It collects after every threshold0 net allocations (allocations minus
deallocations).  If you create and delete 1000 container objects in a
loop then no collection would occur.

> But the difference in total memory consumption with the threshold at
> 100 vs. 1000 vs. 5000 is not all that noticable, a few MB.

The last time I did benchmarks with PyBench and pystone I found that the
difference between threshold0 = 100 and threshold0 = 0 (ie. infinity)
was small.  Remember that the collector only counts container objects.
Creating a thousand dicts with lots of non-container objects inside of
them could easily cause an out of memory situation.

Because of the generational collection usually only threshold0 objects
are examined while collecting.  Thus, setting threshold0 low has the
effect of quickly moving objects into the older generations.  Collection
is quick because only a few objects are examined.  
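
The modern counterparts (gc.get_count() arrived well after this thread)
make the mechanics visible:

```python
import gc

gc.disable()                       # manual control; the counters still tick
gc.collect()                       # start from a clean slate
before = gc.get_count()[0]
objs = [[] for _ in range(500)]    # 500 tracked container objects, kept alive
after = gc.get_count()[0]
gc.collect(0)                      # collect gen 0; survivors are promoted
reset = gc.get_count()[0]
gc.enable()

print(after > before)              # True: gen-0 counter tracks net allocations
print(reset < after)               # True: the gen-0 counter was reset
```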

A portable way to find the total allocated memory would be nice.
Perhaps Vladimir's malloc will help us here.  Alternatively we could
modify PyCore_MALLOC to keep track of it in a global variable.  I think
collecting based on an increase in the total allocated memory would work
better.  What do you think?

More benchmarks should be done too.  Your compiler would probably be a
good candidate.  I won't have time today but maybe tonight.

  Neil


From gward@mems-exchange.org  Fri Sep  1 16:49:45 2000
From: gward@mems-exchange.org (Greg Ward)
Date: Fri, 1 Sep 2000 11:49:45 -0400
Subject: [Python-Dev] linuxaudiodev test does nothing
In-Reply-To: <Pine.LNX.4.10.10008311804230.11804-100000@server1.lfw.org>; from ping@lfw.org on Thu, Aug 31, 2000 at 06:16:55PM -0500
References: <14766.50976.102853.695767@buffalo.fnal.gov> <Pine.LNX.4.10.10008311804230.11804-100000@server1.lfw.org>
Message-ID: <20000901114945.A15688@ludwig.cnri.reston.va.us>

On 31 August 2000, Ka-Ping Yee said:
> Just so you know -- i was definitely able to get this to work at
> some point before when we were trying to fix this.  I changed
> test_linuxaudiodev and it played the .AU file correctly.  I haven't
> had time to survey what the state of the various modules is now,
> though -- i'll have a look around and see what's going on.

I have three copies of test_linuxaudiodev.py in my Lib/test directory:
the original, Ping's version, and Michael Hudson's version.  I can't
remember who hacked whose, i.e. whether Michael's or Ping's came first.
Regardless, none of them work.  Here's how they fail:

$ ./python Lib/test/regrtest.py test_linuxaudiodev
test_linuxaudiodev
1 test OK.

...but the sound is horrible: various people opined on this list, many
months ago when I first reported the problem, that it's probably a
format problem.  (The wav/au mixup seems a likely candidate; it can't be
an endianness problem, since the .au file is 8-bit!)

$ ./python Lib/test/regrtest.py test_linuxaudiodev-ping
test_linuxaudiodev-ping
Warning: can't open Lib/test/output/test_linuxaudiodev-ping
test test_linuxaudiodev-ping crashed -- audio format not supported by linuxaudiodev: None
1 test failed: test_linuxaudiodev-ping

...no sound.

$ ./python Lib/test/regrtest.py test_linuxaudiodev-hudson
test_linuxaudiodev-hudson
Warning: can't open Lib/test/output/test_linuxaudiodev-hudson
test test_linuxaudiodev-hudson crashed -- linuxaudiodev.error: (11, 'Resource temporarily unavailable')
1 test failed: test_linuxaudiodev-hudson

...this is the oddest one of all: I get the "crashed" message
immediately, but then the sound starts playing.  I hear "Nobody expects
the Spani---" but then it stops, the test script terminates, and I get
the "1 test failed" message and my shell prompt back.

Confused as hell, and completely ignorant of computer audio,

        Greg
-- 
Greg Ward - software developer                gward@mems-exchange.org
MEMS Exchange / CNRI                           voice: +1-703-262-5376
Reston, Virginia, USA                            fax: +1-703-262-5367


From nascheme@enme.ucalgary.ca  Fri Sep  1 16:56:27 2000
From: nascheme@enme.ucalgary.ca (Neil Schemenauer)
Date: Fri, 1 Sep 2000 09:56:27 -0600
Subject: [Python-Dev] Re: DEBUG_SAVEALL feature for gc not in 2.0b1?
In-Reply-To: <14767.50498.896689.445018@beluga.mojam.com>; from Skip Montanaro on Fri, Sep 01, 2000 at 10:03:30AM -0500
References: <14767.50498.896689.445018@beluga.mojam.com>
Message-ID: <20000901095627.B5571@keymaster.enme.ucalgary.ca>

On Fri, Sep 01, 2000 at 10:03:30AM -0500, Skip Montanaro wrote:
> Neil sent me a patch a week or two ago that implemented a DEBUG_SAVEALL flag
> for the gc module.

I didn't submit the patch to SF yet because I am thinking of redesigning
the gc module API.  I really don't like the current bitmask interface
for setting options.  The redesign could wait for 2.1 but it would be
nice to not have to change a published API.

Does anyone have any ideas on a good interface for setting various GC
options?  There may be many options and they may change with the
evolution of the collector.  My current idea is to use something like:

    gc.get_option(<name>)

    gc.set_option(<name>, <value>, ...)

with the module defining constants for options.  For example:

    gc.set_option(gc.DEBUG_LEAK, 1)

would enable leak debugging.  Does this look okay?  Should I try to get
it done for 2.0?

  Neil


From guido@beopen.com  Fri Sep  1 18:05:21 2000
From: guido@beopen.com (Guido van Rossum)
Date: Fri, 01 Sep 2000 12:05:21 -0500
Subject: [Python-Dev] Prerelease Python fun on Windows!
In-Reply-To: Your message of "Fri, 01 Sep 2000 16:34:52 +0200."
 <20000901163452.N12695@xs4all.nl>
References: <20000901163452.N12695@xs4all.nl>
Message-ID: <200009011705.MAA10274@cj20424-a.reston1.va.home.com>

> Works, too. I had a funny experience, though. I tried to quit the
> interpreter, which I'd started from a DOS box, using ^Z. And it didn't exit.

Really?  It didn't exit?  What had you done before?  I do this all the
time without problems.

> And then I started IDLE, and IDLE started up, the menus worked, I could open
> a new window, but I couldn't type anything. And then I had a bluescreen. But
> after the reboot, everything worked fine, even doing the exact same things.
> 
> Could just be windows crashing on me, it does that often enough, even on
> freshly installed machines. Something about bad karma or something ;)

Well, Fredrik Lundh also had some blue screens which he'd reduced to a
DECREF of NULL in _tkinter.  But it's not fixed, so this may still be
lurking.

On the other hand your laptop might have been screwy already by that
time...

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From guido@beopen.com  Fri Sep  1 18:10:35 2000
From: guido@beopen.com (Guido van Rossum)
Date: Fri, 01 Sep 2000 12:10:35 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Objects abstract.c,2.50,2.51
In-Reply-To: Your message of "Fri, 01 Sep 2000 09:54:09 +0200."
 <20000901095408.M12695@xs4all.nl>
References: <200009010239.TAA27288@slayer.i.sourceforge.net>
 <20000901095408.M12695@xs4all.nl>
Message-ID: <200009011710.MAA10327@cj20424-a.reston1.va.home.com>

> On Thu, Aug 31, 2000 at 07:39:03PM -0700, Guido van Rossum wrote:
> 
> > Add parens suggested by gcc -Wall.

Thomas replied:

> No! This groups the checks wrong. HASINPLACE(v) *has* to be true for any of
> the other tests to happen. I apologize for botching the earlier 2 versions
> and failing to check them, I've been a bit swamped in work the past week :P
> I've checked them in the way they should be. (And checked, with gcc -Wall,
> this time. The error is really gone.)

Doh!  Good catch.  But after looking at the code, I understand why
it's so hard to get right: it's indented wrong, and it's got very
convoluted logic.

Suggestion: don't try to put so much stuff in a single if expression!
I find the version below much clearer, even though it may test for
f==NULL a few extra times.  Thomas, can you verify that I haven't
changed the semantics this time?  You can check it in if you like it,
or you can have me check it in.

PyObject *
PyNumber_InPlaceAdd(PyObject *v, PyObject *w)
{
	PyObject * (*f)(PyObject *, PyObject *) = NULL;
	PyObject *x;

	if (PyInstance_Check(v)) {
		if (PyInstance_HalfBinOp(v, w, "__iadd__", &x,
					 PyNumber_Add, 0) <= 0)
			return x;
	}
	else if (HASINPLACE(v)) {
		if (v->ob_type->tp_as_sequence != NULL)
			f = v->ob_type->tp_as_sequence->sq_inplace_concat;
		if (f == NULL && v->ob_type->tp_as_number != NULL)
			f = v->ob_type->tp_as_number->nb_inplace_add;
		if (f != NULL)
			return (*f)(v, w);
	}

	BINOP(v, w, "__add__", "__radd__", PyNumber_Add);

	if (v->ob_type->tp_as_sequence != NULL) {
		f = v->ob_type->tp_as_sequence->sq_concat;
		if (f != NULL)
			return (*f)(v, w);
	}
	if (v->ob_type->tp_as_number != NULL) {
		if (PyNumber_Coerce(&v, &w) != 0)
			return NULL;
		if (v->ob_type->tp_as_number != NULL) {
			f = v->ob_type->tp_as_number->nb_add;
			if (f != NULL)
				x = (*f)(v, w);
		}
		Py_DECREF(v);
		Py_DECREF(w);
		if (f != NULL)
			return x;
	}

	return type_error("bad operand type(s) for +=");
}

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From mal@lemburg.com  Fri Sep  1 17:23:01 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 01 Sep 2000 18:23:01 +0200
Subject: [Python-Dev] cPickle.c out-of-date w.r.t. version in Zope 2.2.1
References: <14767.47507.843792.223790@beluga.mojam.com> <20000901114201.B5855@kronos.cnri.reston.va.us>
Message-ID: <39AFD7E5.93C0F437@lemburg.com>

Andrew Kuchling wrote:
> 
> On Fri, Sep 01, 2000 at 09:13:39AM -0500, Skip Montanaro wrote:
> >leak.  In working my way through some compilation errors I noticed that
> >Zope's cPickle.c appears to be somewhat different than Python's version.
> >(Haven't checked cStringIO.c yet, but I imagine there may be a couple
> >differences there as well.)
> 
> There are also diffs in cStringIO.c, though not ones that affect
> functionality: ANSI-fication, and a few changes to the Python API
> (PyObject_Length -> PyObject_Size, PyObject_NEW -> PyObject_New, &c).
> 
> The cPickle.c changes look to be:
>     * ANSIfication.
>     * API changes.
>     * Support for Unicode strings.

Huh ? There is support for Unicode objects in Python's cPickle.c...
does Zope's version do something different ?
 
> The API changes are the most annoying ones, since you need to add
> #ifdefs in order for the module to compile with both 1.5.2 and 2.0.
> (Might be worth seeing if this can be alleviated with a few strategic
> macros, though I think not...)
> 
> --amk
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev@python.org
> http://www.python.org/mailman/listinfo/python-dev

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From skip@mojam.com (Skip Montanaro)  Fri Sep  1 17:48:14 2000
From: skip@mojam.com (Skip Montanaro) (Skip Montanaro)
Date: Fri, 1 Sep 2000 11:48:14 -0500 (CDT)
Subject: [Python-Dev] cPickle.c out-of-date w.r.t. version in Zope 2.2.1
In-Reply-To: <20000901114201.B5855@kronos.cnri.reston.va.us>
References: <14767.47507.843792.223790@beluga.mojam.com>
 <20000901114201.B5855@kronos.cnri.reston.va.us>
Message-ID: <14767.56782.649516.231305@beluga.mojam.com>

    amk> There are also diffs in cStringIO.c, though not ones that affect
    amk> functionality: ...

    amk> The API changes are the most annoying ones, since you need to add
    amk> #ifdefs in order for the module to compile with both 1.5.2 and 2.0.

After posting my note I compared the Zope and Py2.0 versions of cPickle.c.
There are enough differences (ANSIfication, gc, unicode support) that it
appears not worthwhile to try to get Python 2.0's cPickle to run under
both 1.5.2 and 2.0.  I tried simply commenting out the relevant lines in Zope's
lib/Components/Setup file.  Zope built fine without them, though I haven't
yet had a chance to test that configuration.  I don't use either cPickle or
cStringIO, nor do I actually use much of Zope, just ZServer and
DocumentTemplates, so I doubt my code would exercise either module heavily.


Skip



From loewis@informatik.hu-berlin.de  Fri Sep  1 18:02:58 2000
From: loewis@informatik.hu-berlin.de (Martin von Loewis)
Date: Fri, 1 Sep 2000 19:02:58 +0200 (MET DST)
Subject: [Python-Dev] DEBUG_SAVEALL feature for gc not in 2.0b1?
Message-ID: <200009011702.TAA26607@pandora.informatik.hu-berlin.de>

> Does this look okay?  Should I try to get it done for 2.0?

I don't see the need for improvement. I consider it a fairly low-level
API, so having bit masks is fine: users dealing with these settings
should know what a bit mask is.

As for the naming of the specific flags: So far, all of them are for
debugging, as would be the proposed DEBUG_SAVEALL. You also have
set/get_threshold, which clearly controls a different kind of setting.

Unless you come up with ten or so additional settings that *must* be
there, I don't see the need for generalizing the API. Why is

  gc.set_option(gc.THRESHOLD, 1000, 100, 10)

so much better than

  gc.set_threshold(1000, 100, 10)

???

Even if you find the need for a better API, it should be possible to
support the current one for a couple more years, no?
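[The set_threshold()/get_threshold() pair Martin refers to works like
this -- a minimal sketch; the three values are the per-generation
collection thresholds:]

```python
import gc

# Save the current thresholds so we can put them back.
old = gc.get_threshold()

# One dedicated call, three generation thresholds -- no generic
# option-name indirection needed.
gc.set_threshold(1000, 100, 10)
assert gc.get_threshold() == (1000, 100, 10)

# Restore the previous settings.
gc.set_threshold(*old)
```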

Martin



From skip@mojam.com (Skip Montanaro)  Fri Sep  1 18:24:58 2000
From: skip@mojam.com (Skip Montanaro) (Skip Montanaro)
Date: Fri, 1 Sep 2000 12:24:58 -0500 (CDT)
Subject: [Python-Dev] cPickle.c out-of-date w.r.t. version in Zope 2.2.1
In-Reply-To: <39AFD7E5.93C0F437@lemburg.com>
References: <14767.47507.843792.223790@beluga.mojam.com>
 <20000901114201.B5855@kronos.cnri.reston.va.us>
 <39AFD7E5.93C0F437@lemburg.com>
Message-ID: <14767.58986.387449.850867@beluga.mojam.com>

    >> The cPickle.c changes look to be:
    >> * ANSIfication.
    >> * API changes.
    >> * Support for Unicode strings.

    MAL> Huh ? There is support for Unicode objects in Python's cPickle.c...
    MAL> does Zope's version do something different ?

Zope is still running 1.5.2 and thus has a version of cPickle that is at
least that old.  The RCS revision string is

     * $Id: cPickle.c,v 1.72 2000/05/09 18:05:09 jim Exp $

I saw new unicode functions in the Python 2.0 version of cPickle that
weren't in the version distributed with Zope 2.2.1.  Here's a grep buffer
from XEmacs:

    cd /home/dolphin/skip/src/Zope/lib/Components/cPickle/
    grep -n -i unicode cPickle.c /dev/null

    grep finished with no matches found at Fri Sep  1 12:39:57

Skip


From mal@lemburg.com  Fri Sep  1 18:36:17 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 01 Sep 2000 19:36:17 +0200
Subject: [Python-Dev] Verbosity of the Makefile
Message-ID: <39AFE911.927AEDDF@lemburg.com>

This is pure cosmetics, but I found that the latest CVS versions
of the Parser Makefile have become somewhat verbose.

Is this really needed ?

Also, I'd suggest adding a line

.SILENT:

to the top-level Makefile to make possible errors more visible
(without the parser messages the Makefile messages for a clean
run fit on a 25-line display).

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From bwarsaw@beopen.com  Fri Sep  1 18:54:16 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Fri, 1 Sep 2000 13:54:16 -0400 (EDT)
Subject: [Python-Dev] Re: Cookie.py security
References: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com>
 <20000830145152.A24581@illuminatus.timo-tasi.org>
Message-ID: <14767.60744.647516.232634@anthem.concentric.net>

>>>>> "timo" ==   <timo@timo-tasi.org> writes:

    timo> Right now, the shortcut 'Cookie.Cookie()' returns an
    timo> instance of the SmartCookie, which uses Pickle.  Most extant
    timo> examples of using the Cookie module use this shortcut.

    timo> We could change 'Cookie.Cookie()' to return an instance of
    timo> SimpleCookie, which does not use Pickle.  Unfortunately,
    timo> this may break existing code (like Mailman), but there is a
    timo> lot of code out there that it won't break.

Not any more!  Around the Mailman 2.0beta5 time frame, I completely
revamped Mailman's cookie stuff because lots of people were having
problems.  One of the things I suspected was that the binary data in
cookies was giving some browsers headaches.  So I took great pains to
make sure that Mailman only passed in carefully crafted string data,
avoiding Cookie.py's pickle stuff.

I use marshal in the application code, and I go further to `hexlify'
the marshaled data (see binascii.hexlify() in Python 2.0).  That way,
I'm further guaranteed that the cookie data will consist only of
characters in the set [0-9A-F], and I don't need to quote the data
(which was another source of browser incompatibility).  I don't think
I've seen any cookie problems reported from people using Mailman
2.0b5.
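[The marshal-plus-hexlify scheme described above can be sketched roughly
as follows; the session payload is illustrative, and note that
binascii.hexlify() emits lowercase hex digits:]

```python
import binascii
import marshal

session = {"user": "barry", "listname": "python-dev"}

# Serialize with marshal, then hex-encode so the cookie value
# contains only hex digits -- nothing that needs quoting.
cookie_value = binascii.hexlify(marshal.dumps(session))
assert all(c in b"0123456789abcdef" for c in cookie_value)

# Reading the cookie back is the reverse trip.
restored = marshal.loads(binascii.unhexlify(cookie_value))
assert restored == session
```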

[Side note: I also changed Mailman to use session cookies by default,
but that probably had no effect on the problems.]

[Side side note: I also had to patch Morsel.OutputString() in my copy
of Cookie.py because there was a test for falseness that should have
been a test for the empty string explicitly.  Otherwise this fails:

    c['foo']['max-age'] = 0

but this succeeds

    c['foo']['max-age'] = "0"

Don't know if that's relevant for Tim's current version.]
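[The failure mode behind that patch is a truthiness test: an integer
max-age of 0 is false, so a naive "if value:" guard drops the attribute
entirely, while the string "0" sneaks through. The modern http.cookies
module (the Python 3 descendant of Cookie.py) tests for the empty string
explicitly, so both spellings render -- a sketch:]

```python
from http.cookies import SimpleCookie

c = SimpleCookie()
c["foo"] = "bar"

# An integer 0 is false, so 'if value:' would have skipped it;
# the fixed code compares against "" explicitly instead.
c["foo"]["max-age"] = 0
assert "Max-Age=0" in c["foo"].OutputString()

# The string "0" is true, so it worked either way.
c["foo"]["max-age"] = "0"
assert "Max-Age=0" in c["foo"].OutputString()
```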

    timo> Also, people could still use the SmartCookie and
    timo> SerialCookie classes, but now they would be more likely to
    timo> read them in the documentation because they are "outside the
    timo> beaten path".

My vote would be to get rid of SmartCookie and SerialCookie and stay
with simple string cookie data only.  Applications can do fancier
stuff on their own if they want.

-Barry


From thomas@xs4all.net  Fri Sep  1 19:00:49 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Fri, 1 Sep 2000 20:00:49 +0200
Subject: [Python-Dev] Prerelease Python fun on Windows!
In-Reply-To: <200009011705.MAA10274@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Fri, Sep 01, 2000 at 12:05:21PM -0500
References: <20000901163452.N12695@xs4all.nl> <200009011705.MAA10274@cj20424-a.reston1.va.home.com>
Message-ID: <20000901200049.L477@xs4all.nl>

On Fri, Sep 01, 2000 at 12:05:21PM -0500, Guido van Rossum wrote:
> > Works, too. I had a funny experience, though. I tried to quit the
> > interpreter, which I'd started from a DOS box, using ^Z. And it didn't exit.

> Really?  It didn't exit?  What had you done before?  I do this all the
> time without problems.

I remember doing 'dir()' and that's it... probably hit a few cursorkeys out
of habit. I was discussing something with a ^@#$*(*#%* suit (the
not-very-intelligent type) and our CEO (who was very interested in the
strange windows, because he thought I was doing something with ADSL :) at the
same time, so I don't remember exactly what I did. I might have hit ^D
before ^Z, though I do remember actively thinking 'must use ^Z' while
starting python, so I don't think so.

When I did roughly the same things after a reboot, all seemed fine. And
yes, I did reboot after installing, before trying things the first time.

> > And then I started IDLE, and IDLE started up, the menus worked, I could open
> > a new window, but I couldn't type anything. And then I had a bluescreen. But
> > after the reboot, everything worked fine, even doing the exact same things.
> > 
> > Could just be windows crashing on me, it does that often enough, even on
> > freshly installed machines. Something about bad karma or something ;)

> Well, Fredrik Lundh also had some blue screens which he'd reduced to a
> DECREF of NULL in _tkinter.  But it's not fixed, so this may still be
> lurking.

The bluescreen came after my entire explorer froze up, so I'm not sure if it
has to do with python crashing. I found it particularly weird that my
'python' interpreter wouldn't exit, and the IDLE windows were working (ie,
Tk working) but not accepting input -- they shouldn't interfere with each
other, should they ?

My laptop is reasonably stable, though it sometimes has some strange glitches
when viewing avi/mpeg's, in particular DVD uhm, 'backups'. But I'm used to
Windows crashing whenever I touch it, so all in all, I think this:

> On the other hand your laptop might have been screwy already by that
> time...

Since all was fine after a reboot, even doing roughly the same things. I'll
see if I can hit it again sometime this weekend. (A full weekend of Python
and Packing ! No work ! Yes!) And I'll do my girl a favor and install
PySol, so she can give it a good testing :-)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From guido@beopen.com  Fri Sep  1 20:34:33 2000
From: guido@beopen.com (Guido van Rossum)
Date: Fri, 01 Sep 2000 14:34:33 -0500
Subject: [Python-Dev] Verbosity of the Makefile
In-Reply-To: Your message of "Fri, 01 Sep 2000 19:36:17 +0200."
 <39AFE911.927AEDDF@lemburg.com>
References: <39AFE911.927AEDDF@lemburg.com>
Message-ID: <200009011934.OAA02358@cj20424-a.reston1.va.home.com>

> This is pure cosmetics, but I found that the latest CVS versions
> of the Parser Makefile have become somewhat verbose.
> 
> Is this really needed ?

Like what?  What has been added?

> Also, I'd suggest adding a line
> 
> .SILENT:
> 
> to the top-level Makefile to make possible errors more visible
> (without the parser messages the Makefile messages for a clean
> run fit on a 25-line display).

I tried this, and it's too quiet -- you don't know what's going on at
all any more.  If you like this, just say "make -s".

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From mal@lemburg.com  Fri Sep  1 19:36:37 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 01 Sep 2000 20:36:37 +0200
Subject: [Python-Dev] Verbosity of the Makefile
References: <39AFE911.927AEDDF@lemburg.com> <200009011934.OAA02358@cj20424-a.reston1.va.home.com>
Message-ID: <39AFF735.F9F3A252@lemburg.com>

Guido van Rossum wrote:
> 
> > This is pure cosmetics, but I found that the latest CVS versions
> > of the Parser Makefile have become somewhat verbose.
> >
> > Is this really needed ?
> 
> Like what?  What has been added?

I was referring to this output:

making Makefile in subdirectory Modules
Compiling (meta-) parse tree into NFA grammar
Making DFA for 'single_input' ...
Making DFA for 'file_input' ...
Making DFA for 'eval_input' ...
Making DFA for 'funcdef' ...
Making DFA for 'parameters' ...
Making DFA for 'varargslist' ...
Making DFA for 'fpdef' ...
Making DFA for 'fplist' ...
Making DFA for 'stmt' ...
Making DFA for 'simple_stmt' ...
Making DFA for 'small_stmt' ...
...
Making DFA for 'list_for' ...
Making DFA for 'list_if' ...
Adding FIRST sets ...
Writing graminit.c ...
Writing graminit.h ...
 
> > Also, I'd suggest adding a line
> >
> > .SILENT:
> >
> > to the top-level Makefile to make possible errors more visible
> > (without the parser messages the Makefile messages for a clean
> > run fit on a 25-line display).
> 
> I tried this, and it's to quiet -- you don't know what's going on at
> all any more.  If you like this, just say "make -s".

I know, that's what I have in my .aliases file... just thought
that it might be better to only see problems rather than hundreds
of OS commands.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From thomas@xs4all.net  Fri Sep  1 19:58:41 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Fri, 1 Sep 2000 20:58:41 +0200
Subject: [Python-Dev] Verbosity of the Makefile
In-Reply-To: <39AFF735.F9F3A252@lemburg.com>; from mal@lemburg.com on Fri, Sep 01, 2000 at 08:36:37PM +0200
References: <39AFE911.927AEDDF@lemburg.com> <200009011934.OAA02358@cj20424-a.reston1.va.home.com> <39AFF735.F9F3A252@lemburg.com>
Message-ID: <20000901205841.O12695@xs4all.nl>

On Fri, Sep 01, 2000 at 08:36:37PM +0200, M.-A. Lemburg wrote:

> making Makefile in subdirectory Modules
> Compiling (meta-) parse tree into NFA grammar
> Making DFA for 'single_input' ...
> Making DFA for 'file_input' ...
> Making DFA for 'eval_input' ...
> Making DFA for 'funcdef' ...
> Making DFA for 'parameters' ...
> Making DFA for 'varargslist' ...
> Making DFA for 'fpdef' ...
> Making DFA for 'fplist' ...
> Making DFA for 'stmt' ...
> Making DFA for 'simple_stmt' ...
> Making DFA for 'small_stmt' ...
> ...
> Making DFA for 'list_for' ...
> Making DFA for 'list_if' ...
> Adding FIRST sets ...
> Writing graminit.c ...
> Writing graminit.h ...

How about just removing the Grammar rule in releases ? It's only useful for
people fiddling with the Grammar, and we had a lot of those fiddles in the
last few weeks. It's not really necessary to rebuild the grammar after each
reconfigure (which is basically what the Grammar rule does).

Repetitively-y'rs,
-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From guido@beopen.com  Fri Sep  1 21:11:02 2000
From: guido@beopen.com (Guido van Rossum)
Date: Fri, 01 Sep 2000 15:11:02 -0500
Subject: [Python-Dev] Verbosity of the Makefile
In-Reply-To: Your message of "Fri, 01 Sep 2000 20:36:37 +0200."
 <39AFF735.F9F3A252@lemburg.com>
References: <39AFE911.927AEDDF@lemburg.com> <200009011934.OAA02358@cj20424-a.reston1.va.home.com>
 <39AFF735.F9F3A252@lemburg.com>
Message-ID: <200009012011.PAA02974@cj20424-a.reston1.va.home.com>

> I was referring to this output:
> 
> making Makefile in subdirectory Modules
> Compiling (meta-) parse tree into NFA grammar
> Making DFA for 'single_input' ...
> Making DFA for 'file_input' ...
> Making DFA for 'eval_input' ...
> Making DFA for 'funcdef' ...
> Making DFA for 'parameters' ...
> Making DFA for 'varargslist' ...
> Making DFA for 'fpdef' ...
> Making DFA for 'fplist' ...
> Making DFA for 'stmt' ...
> Making DFA for 'simple_stmt' ...
> Making DFA for 'small_stmt' ...
> ...
> Making DFA for 'list_for' ...
> Making DFA for 'list_if' ...
> Adding FIRST sets ...
> Writing graminit.c ...
> Writing graminit.h ...

This should only happen after "make clean" right?  If it annoys you,
we could add >/dev/null to the pgen rule.

> > > Also, I'd suggest adding a line
> > >
> > > .SILENT:
> > >
> > > to the top-level Makefile to make possible errors more visible
> > > (without the parser messages the Makefile messages for a clean
> > > run fit on a 25-line display).
> > 
> > I tried this, and it's too quiet -- you don't know what's going on at
> > all any more.  If you like this, just say "make -s".
> 
> I know, that's what I have in my .aliases file... just thought
> that it might be better to only see problems rather than hundreds
> of OS commands.

-1.  It's too silent to be a good default.  Someone who first unpacks
and builds Python and is used to building other projects would wonder
why make is "hanging" without printing anything.  I've never seen a
Makefile that had this right out of the box.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From nascheme@enme.ucalgary.ca  Fri Sep  1 21:21:36 2000
From: nascheme@enme.ucalgary.ca (Neil Schemenauer)
Date: Fri, 1 Sep 2000 14:21:36 -0600
Subject: [Python-Dev] Verbosity of the Makefile
In-Reply-To: <200009012011.PAA02974@cj20424-a.reston1.va.home.com>; from Guido van Rossum on Fri, Sep 01, 2000 at 03:11:02PM -0500
References: <39AFE911.927AEDDF@lemburg.com> <200009011934.OAA02358@cj20424-a.reston1.va.home.com> <39AFF735.F9F3A252@lemburg.com> <200009012011.PAA02974@cj20424-a.reston1.va.home.com>
Message-ID: <20000901142136.A8205@keymaster.enme.ucalgary.ca>

I'm going to pipe up again about non-recursive makefiles being a good
thing.  This is another reason.

  Neil


From guido@beopen.com  Fri Sep  1 22:48:02 2000
From: guido@beopen.com (Guido van Rossum)
Date: Fri, 01 Sep 2000 16:48:02 -0500
Subject: [Python-Dev] threadmodule.c comment error? (from comp.lang.python)
In-Reply-To: Your message of "Fri, 01 Sep 2000 00:47:03 +0200."
 <00d001c0139d$7be87900$766940d5@hagrid>
References: <00d001c0139d$7be87900$766940d5@hagrid>
Message-ID: <200009012148.QAA08086@cj20424-a.reston1.va.home.com>

> the parse tuple string doesn't quite match the error message
> given if the 2nd argument isn't a tuple.  on the other hand, the
> args argument is initialized to NULL...

I was puzzled until I realized that you mean the error message lies
about the 2nd arg being optional.

I'll remove the word "optional" from the message.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From Vladimir.Marangozov@inrialpes.fr  Fri Sep  1 21:58:50 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Fri, 1 Sep 2000 22:58:50 +0200 (CEST)
Subject: [Python-Dev] lookdict
In-Reply-To: <200009011631.LAA09876@cj20424-a.reston1.va.home.com> from "Guido van Rossum" at Sep 01, 2000 11:31:26 AM
Message-ID: <200009012058.WAA28061@python.inrialpes.fr>

Aha. Thanks for the explanation.

Guido van Rossum wrote:
> 
> Thanks, Marc-Andre, for pointing out that Fred's lookdict code is
> actually an improvement.

Right. I was too fast. There is some speedup due to the string
specialization. I'll post a patch to SF with some more tweaks
of this implementation. Briefly:

- do not call PyErr_Clear() systematically after PyObject_Compare();
  only if (!error_restore && PyErr_Occurred())
- defer variable initializations after common return cases
- avoid using more vars in lookdict_string + specialize string_compare()
- inline the most frequent case in PyDict_GetItem (the first item probe)

> The reason for all this is that we found that lookdict() calls
> PyObject_Compare() without checking for errors.  If there's a key that
> raises an error when compared to another key, the keys compare unequal
> and an exception is set, which may disturb an exception that the
> caller of PyDict_GetItem() might be calling.  PyDict_GetItem() is
> documented as never raising an exception.  This is actually not strong
> enough; it was actually intended to never clear an exception either.
> The potential errors from PyObject_Compare() violate this contract.
> Note that these errors are nothing new; PyObject_Compare() has been
> able to raise exceptions for a long time, e.g. from errors raised by
> __cmp__().
> 
> The first-order fix is to call PyErr_Fetch() and PyErr_restore()
> around the calls to PyObject_Compare().  This is slow (for reasons
> Vladimir points out) even though Fred was very careful to only call
> PyErr_Fetch() or PyErr_Restore() when absolutely necessary and only
> once per lookdict call.  The second-order fix therefore is Fred's
> specialization for string-keys-only dicts.
> 
> There's another problem: as fixed, lookdict needs a current thread
> state!  (Because the exception state is stored per thread.)  There are
> cases where PyDict_GetItem() is called when there's no thread state!
> The first one we found was Tim Peters' patch for _PyPclose (see
> separate message).  There may be others -- we'll have to fix these
> when we find them (probably after 2.0b1 is released but hopefully
> before 2.0 final).

Hm. Question: is it possible for the thread state to swap during
PyObject_Compare()? If it is possible, things are more complicated
than I thought...

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From Vladimir.Marangozov@inrialpes.fr  Fri Sep  1 22:08:14 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Fri, 1 Sep 2000 23:08:14 +0200 (CEST)
Subject: [Python-Dev] Re: DEBUG_SAVEALL feature for gc not in 2.0b1?
In-Reply-To: <20000901095627.B5571@keymaster.enme.ucalgary.ca> from "Neil Schemenauer" at Sep 01, 2000 09:56:27 AM
Message-ID: <200009012108.XAA28091@python.inrialpes.fr>

Neil Schemenauer wrote:
> 
> I didn't submit the patch to SF yet because I am thinking of redesigning
> the gc module API.  I really don't like the current bitmask interface
> for setting options.

Why? There's nothing wrong with it.

> 
> Does anyone have any ideas on a good interface for setting various GC
> options?  There may be many options and they may change with the
> evolution of the collector.  My current idea is to use something like:
> 
>     gc.get_option(<name>)
> 
>     gc.set_option(<name>, <value>, ...)
> 
> with the module defining constants for options.  For example:
> 
>     gc.set_option(gc.DEBUG_LEAK, 1)
> 
> would enable leak debugging.  Does this look okay?  Should I try to get
> it done for 2.0?

This is too much. Don't worry, it's perfect as is.
Also, I support the idea of exporting the collected garbage for
debugging -- haven't looked at the patch though. Is it possible
to collect it subsequently?

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From guido@beopen.com  Fri Sep  1 23:04:48 2000
From: guido@beopen.com (Guido van Rossum)
Date: Fri, 01 Sep 2000 17:04:48 -0500
Subject: [Python-Dev] lookdict
In-Reply-To: Your message of "Fri, 01 Sep 2000 22:58:50 +0200."
 <200009012058.WAA28061@python.inrialpes.fr>
References: <200009012058.WAA28061@python.inrialpes.fr>
Message-ID: <200009012204.RAA08266@cj20424-a.reston1.va.home.com>

> Right. I was too fast. There is some speedup due to the string
> specialization. I'll post a patch to SF with some more tweaks
> of this implementation. Briefly:
> 
> - do not call PyErr_Clear() systematically after PyObject_Compare();
>   only if (!error_restore && PyErr_Occurred())

What do you mean?  The lookdict code checked in already checks
PyErr_Occurred().

> - defer variable initializations after common return cases
> - avoid using more vars in lookdict_string + specialize string_compare()
> - inline the most frequent case in PyDict_GetItem (the first item probe)

Cool.

> Hm. Question: is it possible for the thread state to swap during
> PyObject_Compare()? If it is possible, things are more complicated
> than I thought...

Doesn't matter -- it will always swap back.  It's tied to the
interpreter lock.

Now, for truly devious code dealing with the lock and thread state,
see the changes to _PyPclose() that Tim Peters just checked in...

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From Vladimir.Marangozov@inrialpes.fr  Fri Sep  1 22:16:23 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Fri, 1 Sep 2000 23:16:23 +0200 (CEST)
Subject: [Python-Dev] lookdict
In-Reply-To: <200009012204.RAA08266@cj20424-a.reston1.va.home.com> from "Guido van Rossum" at Sep 01, 2000 05:04:48 PM
Message-ID: <200009012116.XAA28130@python.inrialpes.fr>

Guido van Rossum wrote:
> 
> > Right. I was too fast. There is some speedup due to the string
> > specialization. I'll post a patch to SF with some more tweaks
> > of this implementation. Briefly:
> > 
> > - do not call PyErr_Clear() systematically after PyObject_Compare();
> >   only if (!error_restore && PyErr_Occurred())
> 
> What do you mean?  The lookdict code checked in already checks
> PyErr_Occurred().

I was too fast again. Actually, PyErr_Clear() is called when
PyErr_Occurred() is true. PyErr_Occurred() is called systematically
after PyObject_Compare(), and it will evaluate to true even if the
error was previously fetched.

So I mean that the test for detecting whether a *new* exception is
raised by PyObject_Compare() is (!error_restore && PyErr_Occurred())
because error_restore is set only when there's a previous exception
in place (before the call to PyObject_Compare()).  Only in that case
do we need to clear the new error.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From nascheme@enme.ucalgary.ca  Fri Sep  1 22:36:12 2000
From: nascheme@enme.ucalgary.ca (Neil Schemenauer)
Date: Fri, 1 Sep 2000 15:36:12 -0600
Subject: [Python-Dev] Re: DEBUG_SAVEALL feature for gc not in 2.0b1?
In-Reply-To: <200009012108.XAA28091@python.inrialpes.fr>; from Vladimir Marangozov on Fri, Sep 01, 2000 at 11:08:14PM +0200
References: <20000901095627.B5571@keymaster.enme.ucalgary.ca> <200009012108.XAA28091@python.inrialpes.fr>
Message-ID: <20000901153612.A9121@keymaster.enme.ucalgary.ca>

On Fri, Sep 01, 2000 at 11:08:14PM +0200, Vladimir Marangozov wrote:
> Also, I support the idea of exporting the collected garbage for
> debugging -- haven't looked at the patch though. Is it possible
> to collect it subsequently?

No.  Once objects are in gc.garbage they are back under the user's
control.  How do you see things working otherwise?

  Neil


From Vladimir.Marangozov@inrialpes.fr  Fri Sep  1 22:47:59 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Fri, 1 Sep 2000 23:47:59 +0200 (CEST)
Subject: [Python-Dev] Re: DEBUG_SAVEALL feature for gc not in 2.0b1?
In-Reply-To: <20000901153612.A9121@keymaster.enme.ucalgary.ca> from "Neil Schemenauer" at Sep 01, 2000 03:36:12 PM
Message-ID: <200009012147.XAA28215@python.inrialpes.fr>

Neil Schemenauer wrote:
> 
> On Fri, Sep 01, 2000 at 11:08:14PM +0200, Vladimir Marangozov wrote:
> > Also, I support the idea of exporting the collected garbage for
> > debugging -- haven't looked at the patch though. Is it possible
> > to collect it subsequently?
> 
> No.  Once objects are in gc.garbage they are back under the user's
> control.  How do you see things working otherwise?

By putting them in gc.collected_garbage. The next collect() should be
able to empty this list if the DEBUG_SAVEALL flag is not set. Do you
see any problems with this?

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From guido@beopen.com  Fri Sep  1 23:43:29 2000
From: guido@beopen.com (Guido van Rossum)
Date: Fri, 01 Sep 2000 17:43:29 -0500
Subject: [Python-Dev] lookdict
In-Reply-To: Your message of "Fri, 01 Sep 2000 23:16:23 +0200."
 <200009012116.XAA28130@python.inrialpes.fr>
References: <200009012116.XAA28130@python.inrialpes.fr>
Message-ID: <200009012243.RAA08429@cj20424-a.reston1.va.home.com>

> > > - do not call PyErr_Clear() systematically after PyObject_Compare();
> > >   only if (!error_restore && PyErr_Occurred())
> > 
> > What do you mean?  The lookdict code checked in already checks
> > PyErr_Occurrs().
> 
> Was fast again. Actually PyErr_Clear() is called on PyErr_Occurred().
> PyErr_Occurred() is called systematically after PyObject_Compare()
> and it will evaluate to true even if the error was previously fetched.

No, PyErr_Fetch() clears the exception!  PyErr_Restore() restores it.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From Vladimir.Marangozov@inrialpes.fr  Fri Sep  1 22:51:47 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Fri, 1 Sep 2000 23:51:47 +0200 (CEST)
Subject: [Python-Dev] lookdict
In-Reply-To: <200009012243.RAA08429@cj20424-a.reston1.va.home.com> from "Guido van Rossum" at Sep 01, 2000 05:43:29 PM
Message-ID: <200009012151.XAA28257@python.inrialpes.fr>

Guido van Rossum wrote:
> 
> > > > - do not call PyErr_Clear() systematically after PyObject_Compare();
> > > >   only if (!error_restore && PyErr_Occurred())
> > > 
> > > What do you mean?  The lookdict code checked in already checks
> > > PyErr_Occurrs().
> > 
> > Was fast again. Actually PyErr_Clear() is called on PyErr_Occurred().
> > PyErr_Occurred() is called systematically after PyObject_Compare()
> > and it will evaluate to true even if the error was previously fetched.
> 
> No, PyErr_Fetch() clears the exception!  PyErr_Restore() restores it.

Oops, right. This saves a function call, then. Still good.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From tim_one@email.msn.com  Fri Sep  1 22:53:09 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Fri, 1 Sep 2000 17:53:09 -0400
Subject: [Python-Dev] Even more prerelease Python fun on Windows!
Message-ID: <LNBBLJKPBEHFEDALKOLCMEIDHDAA.tim_one@email.msn.com>

As below, except the new file is

    /pub/windows/beopen-python2b1p2-20000901.exe
    5,783,115 bytes

still from anonymous FTP at python.beopen.com.  The p1 version has been
removed.

+ test_popen2 should work on Windows 2000 now (turned out that,
  as feared, MS "more" doesn't work the same way across Windows
  flavors).

+ Minor changes to the installer.

+ New LICENSE.txt and README.txt in the root of your Python
  installation.

+ Whatever other bugfixes people committed in the 8 hours since
  2b1p1 was built.

Thanks for the help so far!  We've learned that things are generally working
well; that on Windows 2000 the correct one of "admin" or "non-admin" install
works & is correctly triggered by whether the user has admin privileges; and
that Thomas's Win98FE suffers infinitely more blue-screen deaths than Tim's
Win98SE ever did <wink>.

Haven't heard from anyone on Win95, Windows Me, or Windows NT yet.  And I'm
downright eager to ignore Win64 for now.

-----Original Message-----
Sent: Friday, September 01, 2000 7:35 AM
To: PythonDev; Audun.Runde@sas.com
Cc: audun@mindspring.com
Subject: [Python-Dev] Prerelease Python fun on Windows!


A prerelease of the Python2.0b1 Windows installer is now available via
anonymous FTP, from

    python.beopen.com

file

    /pub/windows/beopen-python2b1p1-20000901.exe
    5,766,988 bytes

Be sure to set FTP Binary mode before you get it.

This is not *the* release.  Indeed, the docs are still from some old
pre-beta version of Python 1.6 (sorry, Fred, but I'm really sleepy!).  What
I'm trying to test here is the installer, and the basic integrity of the
installation.  A lot has changed, and we hope all for the better.

Points of particular interest:

+ I'm running a Win98SE laptop.  The install works great for me.  How
  about NT?  2000?  95?  ME?  Win64 <shudder>?

+ For the first time ever, the Windows installer should *not* require
  administrator privileges under NT or 2000.  This is untested.  If you
  log in as an administrator, it should write Python's registry info
  under HKEY_LOCAL_MACHINE.  If not an administrator, it should pop up
  an informative message and write the registry info under
  HKEY_CURRENT_USER instead.  Does this work?  This prerelease includes
  a patch from Mark Hammond that makes Python look in HKCU before HKLM
  (note that that also allows users to override the HKLM settings, if
  desired).

+ Try
    python lib/test/regrtest.py

  test_socket is expected to fail if you're not on a network, or logged
  into your ISP, at the time you run the test suite.  Otherwise
  test_socket is expected to pass.  All other tests are expected to
  pass (although, as always, a number of Unix-specific tests should get
  skipped).

+ Get into a DOS-box Python, and try

      import Tkinter
      Tkinter._test()

  This installation of Python should not interfere with, or be damaged
  by, any other installation of Tcl/Tk you happen to have lying around.
  This is also the first time we're using Tcl/Tk 8.3.2, and that needs
  wider testing too.

+ If the Tkinter test worked, try IDLE!
  Start -> Programs -> Python20 -> IDLE.

+ There is no time limit on this installation.  But if you use it for
  more than 30 days, you're going to have to ask us to pay you <wink>.

windows!-it's-not-just-for-breakfast-anymore-ly y'rs  - tim



_______________________________________________
Python-Dev mailing list
Python-Dev@python.org
http://www.python.org/mailman/listinfo/python-dev




From skip@mojam.com  Fri Sep  1 23:08:05 2000
From: skip@mojam.com (Skip Montanaro)
Date: Fri, 1 Sep 2000 17:08:05 -0500 (CDT)
Subject: [Python-Dev] Re: DEBUG_SAVEALL feature for gc not in 2.0b1?
In-Reply-To: <20000901153612.A9121@keymaster.enme.ucalgary.ca>
References: <20000901095627.B5571@keymaster.enme.ucalgary.ca>
 <200009012108.XAA28091@python.inrialpes.fr>
 <20000901153612.A9121@keymaster.enme.ucalgary.ca>
Message-ID: <14768.10437.352066.987557@beluga.mojam.com>

>>>>> "Neil" == Neil Schemenauer <nascheme@enme.ucalgary.ca> writes:

    Neil> On Fri, Sep 01, 2000 at 11:08:14PM +0200, Vladimir Marangozov wrote:
    >> Also, I support the idea of exporting the collected garbage for
    >> debugging -- haven't looked at the patch though. Is it possible
    >> to collect it subsequently?

    Neil> No.  Once objects are in gc.garbage they are back under the user's
    Neil> control.  How do you see things working otherwise?

Can't you just turn off gc.DEBUG_SAVEALL and reinitialize gc.garbage to []?
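Yes -- a minimal sketch of that recipe using the real gc API (the Node class
is only an illustration):

```python
import gc

gc.disable()                     # collect only when we ask to
gc.set_debug(gc.DEBUG_SAVEALL)   # save *all* unreachable objects in gc.garbage

class Node:
    pass

a, b = Node(), Node()
a.peer, b.peer = b, a            # build a reference cycle
del a, b

gc.collect()
assert gc.garbage                # the cycle was saved, not freed

# Skip's suggestion: clear the flag and reinitialize the list
gc.set_debug(0)
del gc.garbage[:]
gc.collect()                     # subsequent garbage is freed normally again
gc.enable()
```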

Skip



From nascheme@enme.ucalgary.ca  Fri Sep  1 23:10:32 2000
From: nascheme@enme.ucalgary.ca (Neil Schemenauer)
Date: Fri, 1 Sep 2000 16:10:32 -0600
Subject: [Python-Dev] Re: DEBUG_SAVEALL feature for gc not in 2.0b1?
In-Reply-To: <200009012147.XAA28215@python.inrialpes.fr>; from Vladimir Marangozov on Fri, Sep 01, 2000 at 11:47:59PM +0200
References: <20000901153612.A9121@keymaster.enme.ucalgary.ca> <200009012147.XAA28215@python.inrialpes.fr>
Message-ID: <20000901161032.B9121@keymaster.enme.ucalgary.ca>

On Fri, Sep 01, 2000 at 11:47:59PM +0200, Vladimir Marangozov wrote:
> By putting them in gc.collected_garbage. The next collect() should be
> able to empty this list if the DEBUG_SAVEALL flag is not set. Do you
> see any problems with this?

I don't really see the point.  If someone has set the SAVEALL flag then
they are obviously debugging a program.  I don't see much point
in the GC cleaning up this garbage.  The user can do it if they like.

I have an idea for an alternate interface.  What if there was a
gc.handle_garbage hook which could be set to a function?  The collector
would pass garbage objects to this function one at a time.  If the
function returns true then it means that the garbage was handled and the
collector should not call tp_clear.  These handlers could be chained
together like import hooks.  The default handler would simply append to
the gc.garbage list.  If a debugging flag was set then all found garbage
would be passed to this function rather than just uncollectable garbage.
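The gc module has no such hook today, so here is only a pure-Python sketch of
the chaining protocol described above, with handle_garbage emulated as a
hypothetical module-level callable:

```python
import gc

# Hypothetical protocol: the collector would call handle_garbage(obj) for
# each garbage object; a true return means "handled, do not call tp_clear".
def default_handler(obj):
    gc.garbage.append(obj)       # today's behaviour: just save the object
    return True

handle_garbage = default_handler

class Vertex:
    pass

# Chain a new handler in front of the old one, like an import hook:
def break_vertex_cycle(obj, next=handle_garbage):
    if isinstance(obj, Vertex):
        obj.__dict__.clear()     # break the cycle ourselves
        return True
    return next(obj)             # otherwise delegate down the chain

handle_garbage = break_vertex_cycle
```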

Skip, would a hook like this be useful to you?

  Neil


From trentm@ActiveState.com  Fri Sep  1 23:15:13 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Fri, 1 Sep 2000 15:15:13 -0700
Subject: [Python-Dev] Even more prerelease Python fun on Windows!
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEIDHDAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Fri, Sep 01, 2000 at 05:53:09PM -0400
References: <LNBBLJKPBEHFEDALKOLCMEIDHDAA.tim_one@email.msn.com>
Message-ID: <20000901151513.B14097@ActiveState.com>

On Fri, Sep 01, 2000 at 05:53:09PM -0400, Tim Peters wrote:
> And I'm
> downright eager to ignore Win64 for now.

Works for me!

I won't get a chance to look at this for a while.

Trent


-- 
Trent Mick
TrentM@ActiveState.com


From gward@mems-exchange.org  Sat Sep  2 01:56:47 2000
From: gward@mems-exchange.org (Greg Ward)
Date: Fri, 1 Sep 2000 20:56:47 -0400
Subject: [Python-Dev] Verbosity of the Makefile
In-Reply-To: <20000901142136.A8205@keymaster.enme.ucalgary.ca>; from nascheme@enme.ucalgary.ca on Fri, Sep 01, 2000 at 02:21:36PM -0600
References: <39AFE911.927AEDDF@lemburg.com> <200009011934.OAA02358@cj20424-a.reston1.va.home.com> <39AFF735.F9F3A252@lemburg.com> <200009012011.PAA02974@cj20424-a.reston1.va.home.com> <20000901142136.A8205@keymaster.enme.ucalgary.ca>
Message-ID: <20000901205647.A27038@ludwig.cnri.reston.va.us>

On 01 September 2000, Neil Schemenauer said:
> I'm going to pipe up again about non-recursive makefiles being a good
> thing.  This is another reason.

+1 in principle.  I suspect un-recursifying Python's build system would
be a pretty conclusive demonstration of whether the "Recursive Makefiles
Considered Harmful" thesis hold water.  Want to try to hack something
together one of these days?  (Probably not for 2.0, though.)

        Greg


From m.favas@per.dem.csiro.au  Sat Sep  2 02:15:11 2000
From: m.favas@per.dem.csiro.au (Mark Favas)
Date: Sat, 02 Sep 2000 09:15:11 +0800
Subject: [Python-Dev] test_gettext.py fails on 64-bit architectures
References: <39AEBD4A.55ABED9E@per.dem.csiro.au>
 <39AE07FF.478F413@per.dem.csiro.au>
 <14766.14278.609327.610929@anthem.concentric.net>
 <39AEBD01.601F7A83@per.dem.csiro.au> <14766.59597.713039.633184@anthem.concentric.net>
Message-ID: <39B0549F.DA8D07A8@per.dem.csiro.au>

"Barry A. Warsaw" wrote:
> Thanks to a quick chat with Tim, who is always quick to grasp the meat
> of the issue, we realize we need to & 0xffffffff all the 32 bit
> unsigned ints we're reading out of the .mo files.  I'll work out a
> patch, and check it in after a test on 32-bit Linux.  Watch for it,
> and please try it out on your box.

Yep - works fine on my 64-bitter (well, it certainly passes the test
<grin>)

Mark


From skip@mojam.com  Sat Sep  2 03:03:51 2000
From: skip@mojam.com (Skip Montanaro)
Date: Fri, 1 Sep 2000 21:03:51 -0500 (CDT)
Subject: [Python-Dev] Re: DEBUG_SAVEALL feature for gc not in 2.0b1?
In-Reply-To: <20000901161032.B9121@keymaster.enme.ucalgary.ca>
References: <20000901153612.A9121@keymaster.enme.ucalgary.ca>
 <200009012147.XAA28215@python.inrialpes.fr>
 <20000901161032.B9121@keymaster.enme.ucalgary.ca>
Message-ID: <14768.24583.622144.16075@beluga.mojam.com>

    Neil> On Fri, Sep 01, 2000 at 11:47:59PM +0200, Vladimir Marangozov wrote:
    >> By putting them in gc.collected_garbage. The next collect() should be
    >> able to empty this list if the DEBUG_SAVEALL flag is not set. Do you
    >> see any problems with this?

    Neil> I don't really see the point.  If someone has set the SAVEALL flag
    Neil> then they are obviously debugging a program.  I don't see much
    Neil> point in the GC cleaning up this garbage.  The user can do it if
    Neil> they like.

Agreed.

    Neil> I have an idea for an alternate interface.  What if there was a
    Neil> gc.handle_garbage hook which could be set to a function?  The
    Neil> collector would pass garbage objects to this function one at a
    Neil> time.  If the function returns true then it means that the garbage
    Neil> was handled and the collector should not call tp_clear.  These
    Neil> handlers could be chained together like import hooks.  The default
    Neil> handler would simply append to the gc.garbage list.  If a
    Neil> debugging flag was set then all found garbage would be passed to
    Neil> this function rather than just uncollectable garbage.

    Neil> Skip, would a hook like this be useful to you?

Sounds too complex for my feeble brain... ;-)

What's the difference between "found garbage" and "uncollectable garbage"?
What sort of garbage are you appending to gc.garbage now?  I thought by the
very nature of your garbage collector, anything it could free was otherwise
"uncollectable".

S


From effbot@telia.com  Sat Sep  2 10:31:04 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Sat, 2 Sep 2000 11:31:04 +0200
Subject: [Python-Dev] Even more prerelease Python fun on Windows!
References: <LNBBLJKPBEHFEDALKOLCMEIDHDAA.tim_one@email.msn.com>
Message-ID: <007901c014c0$852eff60$766940d5@hagrid>

tim wrote:
> Thomas's Win98FE suffers infinitely more blue-screen deaths than Tim's
> Win98SE ever did <wink>.

just fyi, Tkinter seems to be extremely unstable on Win95 and
Win98FE (when shut down, the python process grabs the keyboard
and hangs.  the only way to kill the process is to reboot)

the same version of Tk (wish) works just fine...

</F>



From effbot@telia.com  Sat Sep  2 12:32:31 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Sat, 2 Sep 2000 13:32:31 +0200
Subject: [Python-Dev] "declare" reserved word (was: pragma)
References: <200009010237.OAA18429@s454.cosc.canterbury.ac.nz> <39AF6C4C.62451C87@lemburg.com>
Message-ID: <01b201c014d1$7c081a00$766940d5@hagrid>

mal wrote:
> I gave some examples in the other pragma thread. The main
> idea behind "declare" is to define flags at compilation
> time, the encoding of string literals being one of the
> original motivations for introducing these flags:
>
> declare encoding = "latin-1"
> x = u"This text will be interpreted as Latin-1 and stored as Unicode"
>
> declare encoding = "ascii"
> y = u"This is supposed to be ASCII, but contains äöü Umlauts - error !"

-1

for sanity's sake, we should only allow a *single* encoding per
source file.  anything else is madness.

besides, the goal should be to apply the encoding to the entire
file, not just the contents of string literals.

(hint: how many editing and display environments support multiple
encodings per text file?)

</F>



From mal@lemburg.com  Sat Sep  2 15:01:15 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Sat, 02 Sep 2000 16:01:15 +0200
Subject: [Python-Dev] "declare" reserved word (was: pragma)
References: <200009010237.OAA18429@s454.cosc.canterbury.ac.nz> <39AF6C4C.62451C87@lemburg.com> <01b201c014d1$7c081a00$766940d5@hagrid>
Message-ID: <39B1082B.4C9AB44@lemburg.com>

Fredrik Lundh wrote:
> 
> mal wrote:
> > I gave some examples in the other pragma thread. The main
> > idea behind "declare" is to define flags at compilation
> > time, the encoding of string literals being one of the
> > original motivations for introducing these flags:
> >
> > declare encoding = "latin-1"
> > x = u"This text will be interpreted as Latin-1 and stored as Unicode"
> >
> > declare encoding = "ascii"
> > y = u"This is supposed to be ASCII, but contains äöü Umlauts - error !"
> 
> -1

On the "declare" concept or just the above examples ?
 
> for sanity's sake, we should only allow a *single* encoding per
> source file.  anything else is madness.

Uhm, the above was meant as two *separate* examples. I completely
agree that multiple encodings per file should not be allowed
(this would be easy to implement in the compiler).
 
> besides, the goal should be to apply the encoding to the entire
> file, not just the contents of string literals.

I'm not sure this is a good idea. 

The only parts where the encoding matters are string
literals (unless I've overlooked some important detail).
All other parts which could contain non-ASCII text such as
comments are not seen by the compiler.

So all source code encodings should really be ASCII supersets
(even if just to make editing them using a plain 8-bit editor
sane).

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From Vladimir.Marangozov@inrialpes.fr  Sat Sep  2 15:07:52 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Sat, 2 Sep 2000 16:07:52 +0200 (CEST)
Subject: [Python-Dev] Re: DEBUG_SAVEALL feature for gc not in 2.0b1?
In-Reply-To: <20000901161032.B9121@keymaster.enme.ucalgary.ca> from "Neil Schemenauer" at Sep 01, 2000 04:10:32 PM
Message-ID: <200009021407.QAA29710@python.inrialpes.fr>

Neil Schemenauer wrote:
> 
> On Fri, Sep 01, 2000 at 11:47:59PM +0200, Vladimir Marangozov wrote:
> > By putting them in gc.collected_garbage. The next collect() should be
> > able to empty this list if the DEBUG_SAVEALL flag is not set. Do you
> > see any problems with this?
> 
> I don't really see the point.  If someone has set the SAVEALL flag then
> they are obviously debugging a program.  I don't see much point
> in the GC cleaning up this garbage.  The user can do it if they like.

The point is that we have two types of garbage: collectable and
uncollectable. Uncollectable garbage is already saved in gc.garbage
with or without debugging.

Uncollectable garbage is the most harmful. Fixing the program to
avoid that garbage is supposed to have top-ranked priority.

The discussion now goes on taking that one step further, i.e.
make sure that no cycles are created at all, ever. This is what
Skip wants. Skip wants to have access to the collectable garbage and
cleanup at best the code w.r.t. cycles. Fine, but collectable garbage
is priority 2 and mixing the two types of garbage is not nice. It is
not nice because the collector can deal with collectable garbage, but
gives up on the uncollectable one. This distinction in functionality
is important.

That's why I suggested to save the collectable garbage in gc.collected.

In this context, the name SAVEALL is a bit misleading. Uncollectable
garbage is already saved. What's missing is a flag & support to save
the collectable garbage. SAVECOLLECTED is a name on target.

Further, the collect() function should be able to clear gc.collected
if it is not empty and if SAVECOLLECTED is not set. This should not
be perceived as a big deal, though. I see it as a nicety for overall
consistency.

> 
> I have an idea for an alternate interface.  What if there was a
> gc.handle_garbage hook which could be set to a function?  The collector
> would pass garbage objects to this function one at a time.

This is too much. The idea here is to detect garbage earlier, but given
that one can set gc.threshold(1,0,0), thus invoking the collector on
every allocation, one gets the same effect with DEBUG_LEAK. There's
little to no added value.

Such a hook may also exercise the latest changes Jeremy checked in:
if an exception is raised after GC, Python will scream at you with
a fatal error. I don't think it's a good idea to mix Python and C too
much for such a low-level machinery as the garbage collector.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From nascheme@enme.ucalgary.ca  Sat Sep  2 15:08:48 2000
From: nascheme@enme.ucalgary.ca (Neil Schemenauer)
Date: Sat, 2 Sep 2000 08:08:48 -0600
Subject: [Python-Dev] Re: DEBUG_SAVEALL feature for gc not in 2.0b1?
In-Reply-To: <14768.24583.622144.16075@beluga.mojam.com>; from Skip Montanaro on Fri, Sep 01, 2000 at 09:03:51PM -0500
References: <20000901153612.A9121@keymaster.enme.ucalgary.ca> <200009012147.XAA28215@python.inrialpes.fr> <20000901161032.B9121@keymaster.enme.ucalgary.ca> <14768.24583.622144.16075@beluga.mojam.com>
Message-ID: <20000902080848.A13169@keymaster.enme.ucalgary.ca>

On Fri, Sep 01, 2000 at 09:03:51PM -0500, Skip Montanaro wrote:
> What's the difference between "found garbage" and "uncollectable garbage"?

I use the term uncollectable garbage for objects that the collector
cannot call tp_clear on because of __del__ methods.  These objects are
added to gc.garbage (actually, just the instances).  If SAVEALL is
enabled then all objects found are saved in gc.garbage and tp_clear is
not called.

Here is an example of how to use my proposed handle_garbage hook:

	class Vertex:
		def __init__(self):
			self.edges = []
		def add_edge(self, e):
			self.edges.append(e)
		def __del__(self):
			do_something()

	class Edge:
		def __init__(self, vertex_in, vertex_out):
			self.vertex_in = vertex_in
			vertex_in.add_edge(self)
			self.vertex_out = vertex_out
			vertex_out.add_edge(self)

This graph structure contains cycles and will not be collected by
reference counting.  It is also "uncollectable" because it contains a
finalizer on a strongly connected component (i.e. other objects in the
cycle are reachable from the __del__ method).  With the current garbage
collector, instances of Edge and Vertex will appear in gc.garbage when
found to be unreachable by the rest of Python.  The application could
then periodically do:

	for obj in gc.garbage:
		if isinstance(obj, Vertex):
			obj.__dict__.clear()

which would break the reference cycles.  If a handle_garbage hook
existed the application could do:

	def break_graph_cycle(obj, next=gc.handle_garbage):
		if isinstance(obj, Vertex):
			obj.__dict__.clear()
			return 1
		else:
			return next(obj)
	gc.handle_garbage = break_graph_cycle

If you had a leaking program you could use this hook to debug it:

	def debug_cycle(obj, next=gc.handle_garbage):
		print "garbage:", repr(obj)
		return next(obj)

The hook seems to be more general than the gc.garbage list.

  Neil


> What sort of garbage are you appending to gc.garbage now?  I thought by the
> very nature of your garbage collector, anything it could free was otherwise
> "uncollectable".
> 
> S


From Vladimir.Marangozov@inrialpes.fr  Sat Sep  2 15:37:18 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Sat, 2 Sep 2000 16:37:18 +0200 (CEST)
Subject: [Python-Dev] Re: ... gcmodule.c,2.9,2.10
In-Reply-To: <20000901094821.A5571@keymaster.enme.ucalgary.ca> from "Neil Schemenauer" at Sep 01, 2000 09:48:21 AM
Message-ID: <200009021437.QAA29774@python.inrialpes.fr>

Neil Schemenauer wrote:
> 
> On Fri, Sep 01, 2000 at 10:24:46AM -0400, Jeremy Hylton wrote:
> > Even people who do have problems with cyclic garbage don't necessarily
> > need a collection every 100 allocations.  (Is my understanding of what
> > the threshold measures correct?)
> 
> It collects every net threshold0 allocations.  If you create and delete
> 1000 container objects in a loop then no collection would occur.
> 
> > But the difference in total memory consumption with the threshold at
> > 100 vs. 1000 vs. 5000 is not all that noticable, a few MB.

A few megabytes?  Phew! Jeremy -- more power mem to you!
I agree with Neil. 5000 is too high and the purpose of the inclusion
of the collector in the beta is precisely to exercise it & get feedback!
With a threshold of 5000 you've almost disabled the collector, leaving us
only with the memory overhead and the slowdown <wink>.

In short, bring it back to something low, please.
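For reference, the knob under discussion can be inspected and tuned through
the gc API itself; a small sketch (the value 100 is just an example):

```python
import gc

# The three generation thresholds; threshold0 is the net container-object
# allocation count that triggers a generation-0 collection.
original = gc.get_threshold()

# A low threshold0 exercises the collector frequently (useful while the
# beta is out for testing); a very high one all but disables it.
gc.set_threshold(100)
assert gc.get_threshold()[0] == 100

gc.set_threshold(*original)      # restore the previous settings
```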

[Neil]
> A portable way to find the total allocated memory would be nice.
> Perhaps Vladimir's malloc will help us here.

Yep, the mem profiler. The profiler currently collects stats if
enabled. This is slow and unusable in production code. But if the
profiler is disabled, Python runs at full speed. However, the profiler
will include an interface which will ask the mallocs on how much real
mem they manage. This is not implemented yet... Maybe the real mem
interface should go in a separate 'memory' module; don't know yet.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From Vladimir.Marangozov@inrialpes.fr  Sat Sep  2 16:00:47 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Sat, 2 Sep 2000 17:00:47 +0200 (CEST)
Subject: [Python-Dev] Even more prerelease Python fun on Windows!
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEIDHDAA.tim_one@email.msn.com> from "Tim Peters" at Sep 01, 2000 05:53:09 PM
Message-ID: <200009021500.RAA00776@python.inrialpes.fr>

Tim Peters wrote:
> 
> As below, except the new file is
> 
>     /pub/windows/beopen-python2b1p2-20000901.exe
>     5,783,115 bytes
> 
> still from anonymous FTP at python.beopen.com.  The p1 version has been
> removed.

In case my feedback matters, being a Windows amateur, the installation
went smoothly on my home P100 with some early Win95 pre-release. In the
great Windows tradition, I was asked to reboot & did so. The regression
tests passed in console mode.  Then I launched IDLE successfully.  In IDLE
I get *beep* sounds every time I hit RETURN without typing anything.
I was able to close both the console and IDLE without problems. Haven't
tried the uninstall link, though.

don't-ask-me-any-questions-about-Windows'ly y'rs
-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From guido@beopen.com  Sat Sep  2 16:56:30 2000
From: guido@beopen.com (Guido van Rossum)
Date: Sat, 02 Sep 2000 10:56:30 -0500
Subject: [Python-Dev] Verbosity of the Makefile
In-Reply-To: Your message of "Fri, 01 Sep 2000 20:56:47 -0400."
 <20000901205647.A27038@ludwig.cnri.reston.va.us>
References: <39AFE911.927AEDDF@lemburg.com> <200009011934.OAA02358@cj20424-a.reston1.va.home.com> <39AFF735.F9F3A252@lemburg.com> <200009012011.PAA02974@cj20424-a.reston1.va.home.com> <20000901142136.A8205@keymaster.enme.ucalgary.ca>
 <20000901205647.A27038@ludwig.cnri.reston.va.us>
Message-ID: <200009021556.KAA02142@cj20424-a.reston1.va.home.com>

> On 01 September 2000, Neil Schemenauer said:
> > I'm going to pipe up again about non-recursive makefiles being a good
> > thing.  This is another reason.

Greg Ward:
> +1 in principle.  I suspect un-recursifying Python's build system would
> be a pretty conclusive demonstration of whether the "Recursive Makefiles
> Considered Harmful" thesis holds water.  Want to try to hack something
> together one of these days?  (Probably not for 2.0, though.)

To me this seems like a big waste of time.

I see nothing broken with the current setup.  The verbosity is taken
care of by "make -s", for individuals who don't want Make saying
anything.  Another useful option is "make --no-print-directory"; this
removes Make's noisiness about changing directories.  If the pgen
output really bothers you, then let's direct it to /dev/null.  None of
these issues seem to require getting rid of the Makefile recursion.

If it ain't broken, don't fix it!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From guido@beopen.com  Sat Sep  2 17:00:29 2000
From: guido@beopen.com (Guido van Rossum)
Date: Sat, 02 Sep 2000 11:00:29 -0500
Subject: [Python-Dev] Even more prerelease Python fun on Windows!
In-Reply-To: Your message of "Sat, 02 Sep 2000 17:00:47 +0200."
 <200009021500.RAA00776@python.inrialpes.fr>
References: <200009021500.RAA00776@python.inrialpes.fr>
Message-ID: <200009021600.LAA02199@cj20424-a.reston1.va.home.com>

[Vladimir]

> In IDLE I get *beep* sounds every time I hit RETURN without typing
> anything.

This appears to be a weird side effect of the last change I made in
IDLE:

----------------------------
revision 1.28
date: 2000/03/07 18:51:49;  author: guido;  state: Exp;  lines: +24 -0
Override the Undo delegator to forbid any changes before the I/O mark.
It beeps if you try to insert or delete before the "iomark" mark.
This makes the shell less confusing for newbies.
----------------------------

I hope we can fix this before 2.0 final goes out...

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From skip@mojam.com  Sat Sep  2 16:09:49 2000
From: skip@mojam.com (Skip Montanaro)
Date: Sat, 2 Sep 2000 10:09:49 -0500 (CDT)
Subject: [Python-Dev] Re: DEBUG_SAVEALL feature for gc not in 2.0b1?
In-Reply-To: <200009021407.QAA29710@python.inrialpes.fr>
References: <20000901161032.B9121@keymaster.enme.ucalgary.ca>
 <200009021407.QAA29710@python.inrialpes.fr>
Message-ID: <14769.6205.428574.926100@beluga.mojam.com>

    Vlad> The discussion now goes on taking that one step further, i.e.
    Vlad> make sure that no cycles are created at all, ever. This is what
    Vlad> Skip wants. Skip wants to have access to the collectable garbage
    Vlad> and cleanup at best the code w.r.t. cycles. 

If I read my (patched) version of gcmodule.c correctly, with the
gc.DEBUG_SAVEALL bit set, gc.garbage *does* acquire all garbage, not just
the stuff with __del__ methods.  In delete_garbage I see

    if (debug & DEBUG_SAVEALL) {
	    PyList_Append(garbage, op);
    } else {
            ... usual collection business here ...
    }

Skip


From Vladimir.Marangozov@inrialpes.fr  Sat Sep  2 16:43:05 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Sat, 2 Sep 2000 17:43:05 +0200 (CEST)
Subject: [Python-Dev] Re: DEBUG_SAVEALL feature for gc not in 2.0b1?
In-Reply-To: <14769.6205.428574.926100@beluga.mojam.com> from "Skip Montanaro" at Sep 02, 2000 10:09:49 AM
Message-ID: <200009021543.RAA01638@python.inrialpes.fr>

Skip Montanaro wrote:
> 
> If I read my (patched) version of gcmodule.c correctly, with the
> gc.DEBUG_SAVEALL bit set, gc.garbage *does* acquire all garbage, not just
> the stuff with __del__ methods.

Yes. And you don't know which objects are collectable and which ones
are not by this collector. That is, SAVEALL transforms the collector
into a cycle detector. The collectable and uncollectable objects belong
to two disjoint sets. I was arguing about this distinction, because
collectable garbage is not considered garbage any more, uncollectable
garbage is the real garbage left, but if you think this distinction
doesn't serve you any purpose, fine.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From Fredrik Lundh <effbot@telia.com>  Sat Sep  2 17:05:33 2000
From: Fredrik Lundh <effbot@telia.com>
Date: Sat, 2 Sep 2000 18:05:33 +0200
Subject: [Python-Dev] Bug #113254: pre/sre difference breaks pyclbr
Message-ID: <029001c014f7$a203a780$766940d5@hagrid>

paul prescod spotted this discrepancy:

from the documentation:

    start ([group]) 
    end ([group]) 
        Return the indices of the start and end of the
        substring matched by group; group defaults to
        zero (meaning the whole matched substring). Return
        None if group exists but did not contribute to the
        match.

however, it turns out that PCRE doesn't do what it's
supposed to:

>>> import pre
>>> m = pre.match("(a)|(b)", "b")
>>> m.start(1)
-1

unlike SRE:

>>> import sre
>>> m = sre.match("(a)|(b)", "b")
>>> m.start(1)
>>> print m.start(1)
None

this difference breaks 1.6's pyclbr (1.5.2's pyclbr works
just fine with SRE, though...)

:::

should I fix SRE and ask Fred to fix the docs, or should
someone fix pyclbr and maybe even PCRE?

</F>



From guido@beopen.com  Sat Sep  2 18:18:48 2000
From: guido@beopen.com (Guido van Rossum)
Date: Sat, 02 Sep 2000 12:18:48 -0500
Subject: [Python-Dev] Bug #113254: pre/sre difference breaks pyclbr
In-Reply-To: Your message of "Sat, 02 Sep 2000 18:05:33 +0200."
 <029001c014f7$a203a780$766940d5@hagrid>
References: <029001c014f7$a203a780$766940d5@hagrid>
Message-ID: <200009021718.MAA02318@cj20424-a.reston1.va.home.com>

> paul prescod spotted this discrepancy:
> 
> from the documentation:
> 
>     start ([group]) 
>     end ([group]) 
>         Return the indices of the start and end of the
>         substring matched by group; group defaults to
>         zero (meaning the whole matched substring). Return
>         None if group exists but did not contribute to the
>         match.
> 
> however, it turns out that PCRE doesn't do what it's
> supposed to:
> 
> >>> import pre
> >>> m = pre.match("(a)|(b)", "b")
> >>> m.start(1)
> -1
> 
> unlike SRE:
> 
> >>> import sre
> >>> m = sre.match("(a)|(b)", "b")
> >>> m.start(1)
> >>> print m.start(1)
> None
> 
> this difference breaks 1.6's pyclbr (1.5.2's pyclbr works
> just fine with SRE, though...)
> 
> :::
> 
> should I fix SRE and ask Fred to fix the docs, or should
> someone fix pyclbr and maybe even PCRE?

I'd suggest fix SRE and the docs, because -1 is a more useful
indicator for "no match" than None: it has the same type as valid
indices.  It makes it easier to adapt to static typing later.
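For illustration, the -1 convention Guido argues for is what stuck; against a modern re module the same test gives:

```python
import re

m = re.match("(a)|(b)", "b")
print(m.start(1))   # -1: group 1 exists but did not take part in the match
print(m.start(2))   # 0: group 2 matched at index 0
print(m.group(1))   # None is still used for the *text* of an unmatched group
```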

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From Fredrik Lundh <effbot@telia.com>  Sat Sep  2 17:54:57 2000
From: Fredrik Lundh <effbot@telia.com>
Date: Sat, 2 Sep 2000 18:54:57 +0200
Subject: [Python-Dev] Bug #113254: pre/sre difference breaks pyclbr
References: <029001c014f7$a203a780$766940d5@hagrid>  <200009021718.MAA02318@cj20424-a.reston1.va.home.com>
Message-ID: <02d501c014fe$88aa8860$766940d5@hagrid>

[me]
> > from the documentation:
> > 
> >     start ([group]) 
> >     end ([group]) 
> >         Return the indices of the start and end of the
> >         substring matched by group; group defaults to
> >         zero (meaning the whole matched substring). Return
> >         None if group exists but did not contribute to the
> >         match.
> > 
> > however, it turns out that PCRE doesn't do what it's
> > supposed to:
> > 
> > >>> import pre
> > >>> m = pre.match("(a)|(b)", "b")
> > >>> m.start(1)
> > -1

[guido]
> I'd suggest fix SRE and the docs, because -1 is a more useful
> indicator for "no match" than None: it has the same type as valid
> indices.  It makes it easier to adapt to static typing later.

sounds reasonable.  I've fixed the code, leaving the docs to Fred.

this should probably go into 1.6 as well, since pyclbr depends on
it (well, I assume it does -- the pyclbr in the current repository
does, but maybe it's only been updated in the 2.0 code base?)

</F>



From jeremy@beopen.com  Sat Sep  2 18:33:47 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Sat, 2 Sep 2000 13:33:47 -0400
Subject: [Python-Dev] Re: ... gcmodule.c,2.9,2.10
In-Reply-To: <200009021437.QAA29774@python.inrialpes.fr>
Message-ID: <AJEAKILOCCJMDILAPGJNEEKFCBAA.jeremy@beopen.com>

Vladimir Marangozov wrote:
>Neil Schemenauer wrote:
>>
>> On Fri, Sep 01, 2000 at 10:24:46AM -0400, Jeremy Hylton wrote:
>> > Even people who do have problems with cyclic garbage don't necessarily
>> > need a collection every 100 allocations.  (Is my understanding of what
>> > the threshold measures correct?)
>>
>> It collects every net threshold0 allocations.  If you create and delete
>> 1000 container objects in a loop then no collection would occur.
>>
>> > But the difference in total memory consumption with the threshold at
>> > 100 vs. 1000 vs. 5000 is not all that noticable, a few MB.
>
>A few megabytes?  Phew! Jeremy -- more power mem to you!
>I agree with Neil. 5000 is too high and the purpose of the inclusion
>of the collector in the beta is precisely to exercise it & get feedback!
>With a threshold of 5000 you've almost disabled the collector, leaving us
>only with the memory overhead and the slowdown <wink>.
>
>In short, bring it back to something low, please.

I am happy to bring it to a lower number, but not as low as it was.  I
increased it forgetting that it was net allocations and not simply
allocations.  Of course, it's not exactly net allocations because if
deallocations occur while the count is zero, they are ignored.

My reason for disliking the previous lower threshold is that it causes
frequent collections, even in programs that produce no cyclic garbage.  I
understand the garbage collector to be a supplement to the existing
reference counting mechanism, which we expect to work correctly for most
programs.

The benefit of collecting the cyclic garbage periodically is to reduce the
total amount of memory the process uses, by freeing some memory to be reused
by malloc.  The specific effect on process memory depends on the program's
high-water mark for memory use and how much of that memory is consumed by
cyclic trash.  (GC also allows finalization to occur where it might not have
before.)

In one test I did, the difference between the high-water mark for a program
that ran with 3000 GC collections and 300 GC collections was 13MB and 11MB,
a little less than 20%.

The old threshold (100 net allocations) was low enough that most scripts run
several collections during compilation of the bytecode.  The only containers
created during compilation (or loading .pyc files) are the dictionaries that
hold constants.  If the GC is supplemental, I don't believe its threshold
should be set so low that it runs long before any cycles could be created.

The default threshold can be fairly high, because a program that has
problems caused by cyclic trash can set the threshold lower or explicitly
call the collector.  If we assume these programs are less common, there is
no reason to make all programs suffer all of the time.

I have trouble reasoning about the behavior of the pseudo-net allocations
count, but think I would be happier with a higher threshold.  I might find
it easier to understand if the count were of total allocations and
deallocations, with GC occurring every N allocation events.

Any suggestions about what a more reasonable value would be and why it is
reasonable?
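For reference, the knobs under discussion are exposed through the gc module itself; a sketch (the numbers are illustrative, not recommendations):

```python
import gc

print(gc.get_threshold())   # (threshold0, threshold1, threshold2)

gc.set_threshold(100)       # collect after roughly 100 net container allocations
gc.set_threshold(5000)      # nearly disable automatic collection
n = gc.collect()            # or bypass the threshold and collect explicitly
print(n >= 0)               # number of unreachable objects found this pass
```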

Jeremy




From skip@mojam.com (Skip Montanaro)  Sat Sep  2 18:43:06 2000
From: skip@mojam.com (Skip Montanaro)
Date: Sat, 2 Sep 2000 12:43:06 -0500 (CDT)
Subject: [Python-Dev] Re: DEBUG_SAVEALL feature for gc not in 2.0b1?
In-Reply-To: <200009021543.RAA01638@python.inrialpes.fr>
References: <14769.6205.428574.926100@beluga.mojam.com>
 <200009021543.RAA01638@python.inrialpes.fr>
Message-ID: <14769.15402.630192.4454@beluga.mojam.com>

    Vlad> Skip Montanaro wrote:
    >> 
    >> If I read my (patched) version of gcmodule.c correctly, with the
    >> gc.DEBUG_SAVEALL bit set, gc.garbage *does* acquire all garbage, not
    >> just the stuff with __del__ methods.

    Vlad> Yes. And you don't know which objects are collectable and which
    Vlad> ones are not by this collector. That is, SAVEALL transforms the
    Vlad> collector into a cycle detector. 

Which is precisely what I want.  I'm trying to locate cycles in a
long-running program.  In that environment collectable and uncollectable
garbage are just as bad since I still use 1.5.2 in production.

Skip


From tim_one@email.msn.com  Sat Sep  2 19:20:18 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sat, 2 Sep 2000 14:20:18 -0400
Subject: [Python-Dev] Re: ... gcmodule.c,2.9,2.10
In-Reply-To: <AJEAKILOCCJMDILAPGJNEEKFCBAA.jeremy@beopen.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEKDHDAA.tim_one@email.msn.com>

[Neil and Vladimir say a threshold of 5000 is too high!]

[Jeremy says a threshold of 100 is too low!]

[merriment ensues]

> ...
> Any suggestions about what a more reasonable value would be and why
> it is reasonable?
>
> Jeremy

There's not going to be consensus on this, as the threshold is a crude handle on a complex
problem.  That's sure better than *no* handle, but trash behavior is so app-specific that
there simply won't be a killer argument.

In cases like this, the geometric mean of the extreme positions is always the best guess
<0.8 wink>:

>>> import math
>>> math.sqrt(5000 * 100)
707.10678118654755
>>>

So 9 times out of 10 we can run it with a threshold of 707, and 1 out of 10 with 708
<wink>.

Tuning strategies for gc *can* get as complex as OS scheduling algorithms, and for the
same reasons:  you're in the business of predicting the future based on just a few neurons
keeping track of gross summaries of what happened before.  A program can go through many
phases of quite different behavior over its life (like I/O-bound vs compute-bound, or
cycle-happy vs not), and at the phase boundaries past behavior is worse than irrelevant
(it's actively misleading).

So call it 700 for now.  Or 1000.  It's a bad guess at a crude heuristic regardless, and
if we avoid extreme positions we'll probably avoid doing as much harm as we *could* do
<0.9 wink>.  Over time, a more interesting measure may be how much cyclic trash
collections actually recover, and then collect less often the less trash we're finding
(ditto more often when we're finding more).  Another is like that, except replace "trash"
with "cycles (whether trash or not)".  The gross weakness of "net container allocations"
is that it doesn't directly measure what this system was created to do.
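Tim's "collect less often the less trash we're finding" idea could be sketched as a trivial feedback rule (purely illustrative; nothing like this was in gcmodule.c):

```python
def adjust_threshold(threshold, trash_found, lo=100, hi=5000):
    """Grow the collection threshold when a pass recovers little trash,
    shrink it when a pass recovers a lot.  Clamp to [lo, hi]."""
    if trash_found == 0:
        threshold *= 2          # collections are wasted work: back off
    elif trash_found > threshold // 10:
        threshold //= 2         # plenty of cyclic trash: collect sooner
    return max(lo, min(hi, threshold))

t = 700
t = adjust_threshold(t, trash_found=0)     # no trash: threshold doubles to 1400
t = adjust_threshold(t, trash_found=500)   # lots of trash: halves back to 700
```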

These things *always* wind up with dynamic measures, because static ones are just too
crude across apps.  Then the dynamic measures fail at phase boundaries too, and more
gimmicks are added to compensate for that.  Etc.  Over time it will get better for most
apps most of the time.  For now, we want *both* to exercise the code in the field and not
waste too much time, so hasty compromise is good for the beta.

let-a-thousand-thresholds-bloom-ly y'rs  - tim




From tim_one@email.msn.com  Sat Sep  2 19:46:33 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sat, 2 Sep 2000 14:46:33 -0400
Subject: [Python-Dev] Bug #113254: pre/sre difference breaks pyclbr
In-Reply-To: <02d501c014fe$88aa8860$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEKFHDAA.tim_one@email.msn.com>

[start/end (group)  documented to return None for group that
 didn't participate in the match
 sre does this
 pre actually returned -1
 this breaks pyclbr.py
 Guido sez pre's behavior is better & the docs should be changed
]

[/F]
> sounds reasonable.  I've fixed the code, leaving the docs to Fred.
>
> this should probably go into 1.6 as well, since pyclbr depends on
> it (well, I assume it does -- the pyclbr in the current repository
> does, but maybe it's only been updated in the 2.0 code base?)

Good point.  pyclbr got changed last year, to speed it and make it more robust for IDLE's
class browser display.  Which has another curious role to play in this screwup!  When
rewriting pyclbr's parsing, I didn't remember what start(group) would do for a
non-existent group.  In the old days I would have looked up the docs.  But since I had
gotten into the habit of *living* in an IDLE box all day, I just tried it instead and
"ah! -1 ... makes sense, I'll use that" was irresistible.  Since any code relying on the
docs would not have worked (None is the wrong type, and even the wrong value viewed as
boolean), the actual behavior should indeed win here.




From cgw@fnal.gov  Sat Sep  2 16:27:53 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Sat, 2 Sep 2000 10:27:53 -0500 (CDT)
Subject: [Python-Dev] Verbosity of the Makefile
In-Reply-To: <200009021556.KAA02142@cj20424-a.reston1.va.home.com>
References: <39AFE911.927AEDDF@lemburg.com>
 <200009011934.OAA02358@cj20424-a.reston1.va.home.com>
 <39AFF735.F9F3A252@lemburg.com>
 <200009012011.PAA02974@cj20424-a.reston1.va.home.com>
 <20000901142136.A8205@keymaster.enme.ucalgary.ca>
 <20000901205647.A27038@ludwig.cnri.reston.va.us>
 <200009021556.KAA02142@cj20424-a.reston1.va.home.com>
Message-ID: <14769.7289.688557.827915@buffalo.fnal.gov>

Guido van Rossum writes:

 > To me this seems like a big waste of time.
 > I see nothing broken with the current setup. 

I've built Python on every kind of system we have at FNAL, which means
Linux, several versions of Solaris, IRIX, DEC^H^H^HCompaq OSF/1, even
(shudder) WinNT, and the only complaint I've ever had with the build
system is that it doesn't do a "make depend" automatically.  (I don't
care too much about the dependencies on system headers, but the
Makefiles should at least know about the dependencies on Python's own
.h files, so when you change something like opcode.h or node.h it is
properly handled.  Fred got bitten by this when he tried to apply the
EXTENDED_ARG patch.)

Personally, I think that the "Recursive Make Considered Harmful" paper
is a bunch of hot air.  Many highly successful projects - the Linux
kernel, glibc, etc - use recursive Make.

 > If it ain't broken, don't fix it!

Amen!


From cgw@fnal.gov  Fri Sep  1 20:19:58 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Fri, 1 Sep 2000 14:19:58 -0500 (CDT)
Subject: [Python-Dev] Verbosity of the Makefile
In-Reply-To: <200009012011.PAA02974@cj20424-a.reston1.va.home.com>
References: <39AFE911.927AEDDF@lemburg.com>
 <200009011934.OAA02358@cj20424-a.reston1.va.home.com>
 <39AFF735.F9F3A252@lemburg.com>
 <200009012011.PAA02974@cj20424-a.reston1.va.home.com>
Message-ID: <14768.350.21353.538473@buffalo.fnal.gov>

For what it's worth, lots of verbosity in the Makefile makes me happy.
But I'm a verbose sort of guy...

(Part of the reason for sending this is to test if my mail is going
through.  Looks like there's currently no route from fnal.gov to
python.org, I wonder where the problem is?)


From cgw@fnal.gov  Fri Sep  1 17:06:48 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Fri, 1 Sep 2000 11:06:48 -0500 (CDT)
Subject: [Python-Dev] linuxaudiodev test does nothing
In-Reply-To: <20000901114945.A15688@ludwig.cnri.reston.va.us>
References: <14766.50976.102853.695767@buffalo.fnal.gov>
 <Pine.LNX.4.10.10008311804230.11804-100000@server1.lfw.org>
 <20000901114945.A15688@ludwig.cnri.reston.va.us>
Message-ID: <14767.54296.278370.953550@buffalo.fnal.gov>

Greg Ward wrote:

 > ...but the sound is horrible: various people opined on this list, many
 > months ago when I first reported the problem, that it's probably a
 > format problem.  (The wav/au mixup seems a likely candidate; it can't be
 > an endianness problem, since the .au file is 8-bit!)

Did you see the msg I sent yesterday?  (Maybe I send out too many mails)

I'm 99.9% sure it's a format problem, because if you replace
"audiotest.au" with some random ".wav" file, it works. (On my system
anyhow, with pretty generic cheapo soundblaster)

The code in test_linuxaudiodev.py has no chance of ever working
correctly: if you send mu-law encoded (i.e. logarithmic) data to a
device expecting linear, you will get noise.  You have to set the
format first.  And the functions in linuxaudiodev which are intended
to set the format don't work, and go against what is recommended in
the OSS programming documentation.
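The format mismatch is easy to make concrete.  Decoding one 8-bit mu-law sample to 16-bit linear takes a G.711 expansion step roughly like the following (an illustrative sketch of the standard algorithm, not the linuxaudiodev code):

```python
def ulaw_to_linear(byte):
    """Expand one G.711 mu-law byte to a signed 16-bit linear sample."""
    byte = ~byte & 0xFF
    sign = byte & 0x80
    exponent = (byte >> 4) & 0x07
    mantissa = byte & 0x0F
    sample = (((mantissa << 3) + 0x84) << exponent) - 0x84
    return -sample if sign else sample

# Feeding raw mu-law bytes straight to a device expecting linear samples
# skips this expansion entirely -- hence the noise.
print(ulaw_to_linear(0xFF))   # 0: mu-law silence
print(ulaw_to_linear(0x80))   # 32124: loudest positive sample
```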

IMHO this code is up for a complete rewrite, which I will submit post
2.0.  

The quick-and-dirty fix for the 2.0 release is to include
"audiotest.wav" and modify test_linuxaudiodev.py.


Ka-Ping Yee <ping@lfw.org> wrote:
> Are you talking about OSS vs. ALSA?  Didn't they at least try to
> keep some of the basic parts of the interface the same?

No, I'm talking about SoundBlaster8 vs. SoundBlaster16
vs. ProAudioSpectrum vs. Gravis vs. AdLib vs. TurtleBeach vs.... you
get the idea.  You can't know what formats are supported until you
probe the hardware.  Most of these cards *don't* handle logarithmic
data; and *then* depending on whether you have OSS or Alsa there may be
driver-side code to convert logarithmic data to linear before sending
it to the hardware.

The lowest-common-denominator, however, is raw 8-bit linear unsigned
data, which tends to be supported on all PC audio hardware.







From cgw@fnal.gov  Fri Sep  1 17:09:02 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Fri, 1 Sep 2000 11:09:02 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14766.54177.584090.198596@beluga.mojam.com>
References: <39AEC0F4.746656E2@per.dem.csiro.au>
 <14766.50283.758598.632542@bitdiddle.concentric.net>
 <14766.53002.467504.523298@beluga.mojam.com>
 <14766.53381.634928.615048@buffalo.fnal.gov>
 <14766.54177.584090.198596@beluga.mojam.com>
Message-ID: <14767.54430.927663.710733@buffalo.fnal.gov>

Skip Montanaro writes:
 > 
 > Makes no difference:
 > 
 >     % ulimit -a
 >     stack size (kbytes)         unlimited
 >     % ./python Misc/find_recursionlimit.py
 >     Limit of 2400 is fine
 >     repr
 >     Segmentation fault
 > 
 > Skip

This means that you're not hitting the rlimit at all but getting a
real segfault!  Time to do setrlimit -c unlimited and break out GDB,
I'd say.


From cgw@fnal.gov  Fri Sep  1 00:01:22 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Thu, 31 Aug 2000 18:01:22 -0500 (CDT)
Subject: [Python-Dev] linuxaudiodev test does nothing
In-Reply-To: <Pine.LNX.4.10.10008311804230.11804-100000@server1.lfw.org>
References: <14766.50976.102853.695767@buffalo.fnal.gov>
 <Pine.LNX.4.10.10008311804230.11804-100000@server1.lfw.org>
Message-ID: <14766.58306.977241.439169@buffalo.fnal.gov>

Ka-Ping Yee writes:

 > Side note: is there a well-defined platform-independent sound
 > interface we should be conforming to?  It would be nice to have a
 > single Python function for each of the following things:
 > 
 >     1. Play a .wav file given its filename.
 > 
 >     2. Play a .au file given its filename.

These may be possible.

 >     3. Play some raw audio data, given a string of bytes and a
 >        sampling rate.

This would never be possible unless you also specified the format and
encoding of the raw data - are they 8-bit, 16-bit, signed, unsigned,
big-endian, little-endian, linear, logarithmic ("mu-law"), etc?

Not only that, but some audio hardware will support some formats and
not others.  Some sound drivers will attempt to convert from a data
format which is not supported by the audio hardware to one which is,
and others will just reject the data if it's not in a format supported
by the hardware.  Trying to do anything with sound in a
platform-independent manner is near-impossible.  Even the same
"platform" (e.g. RedHat 6.2 on Intel) will behave differently
depending on what soundcard is installed.


From skip@mojam.com (Skip Montanaro)  Sat Sep  2 21:37:54 2000
From: skip@mojam.com (Skip Montanaro)
Date: Sat, 2 Sep 2000 15:37:54 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14767.54430.927663.710733@buffalo.fnal.gov>
References: <39AEC0F4.746656E2@per.dem.csiro.au>
 <14766.50283.758598.632542@bitdiddle.concentric.net>
 <14766.53002.467504.523298@beluga.mojam.com>
 <14766.53381.634928.615048@buffalo.fnal.gov>
 <14766.54177.584090.198596@beluga.mojam.com>
 <14767.54430.927663.710733@buffalo.fnal.gov>
Message-ID: <14769.25890.529541.831812@beluga.mojam.com>

    >> % ulimit -a
    >> stack size (kbytes)         unlimited
    >> % ./python Misc/find_recursionlimit.py
    >> ...
    >> Limit of 2400 is fine
    >> repr
    >> Segmentation fault

    Charles> This means that you're not hitting the rlimit at all but
    Charles> getting a real segfault!  Time to do setrlimit -c unlimited and
    Charles> break out GDB, I'd say.

Running the program under gdb does no good.  It segfaults and winds up with
a corrupt stack as far as the debugger is concerned.  For some reason bash
won't let me set a core file size != 0 either:

    % ulimit -c
    0
    % ulimit -c unlimited
    % ulimit -c
    0

though I doubt letting the program dump core would be any better
debugging-wise than just running the interpreter under gdb's control.

Kinda weird.

Skip


From thomas@xs4all.net  Sat Sep  2 22:36:47 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Sat, 2 Sep 2000 23:36:47 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14767.54430.927663.710733@buffalo.fnal.gov>; from cgw@fnal.gov on Fri, Sep 01, 2000 at 11:09:02AM -0500
References: <39AEC0F4.746656E2@per.dem.csiro.au> <14766.50283.758598.632542@bitdiddle.concentric.net> <14766.53002.467504.523298@beluga.mojam.com> <14766.53381.634928.615048@buffalo.fnal.gov> <14766.54177.584090.198596@beluga.mojam.com> <14767.54430.927663.710733@buffalo.fnal.gov>
Message-ID: <20000902233647.Q12695@xs4all.nl>

On Fri, Sep 01, 2000 at 11:09:02AM -0500, Charles G Waldman wrote:
> Skip Montanaro writes:
>  > Makes no difference:

>  >     stack size (kbytes)         unlimited
>  >     % ./python Misc/find_recursionlimit.py
>  >     Limit of 2400 is fine
>  >     repr
>  >     Segmentation fault

> This means that you're not hitting the rlimit at all but getting a
> real segfault!  Time to do setrlimit -c unlimited and break out GDB,
> I'd say.

Yes, which I did (well, my girlfriend was hogging the PC with 'net
connection, and there was nothing but silly soft-porn on TV, so I spent an
hour or two on my laptop ;) and I did figure out the problem isn't
stackspace (which was already obvious) but *damned* if I know what the
problem is. 

Here's an easy way to step through the whole procedure, though. Take a
recursive script, like the one Guido posted:

    i = 0
    class C:
      def __getattr__(self, name):
          global i
          print i
          i += 1
          return self.name # common beginners' mistake

Run it once, so you get a ballpark figure on when it'll crash, and then
branch right before it would crash, calling some obscure function
(os.getpid() works nicely, very simple function.) This was about 2926 or so
on my laptop (adding the branch changed this number, oddly enough.)

    import os
    i = 0
    class C:
      def __getattr__(self, name):
          global i
          print i
          i += 1
          if (i > 2625):
              os.getpid()
          return self.name # common beginners' mistake

(I also moved the 'print i' to inside the branch, saved me a bit of
scrollin') Then start GDB on the python binary, set a breakpoint on
posix_getpid, and "run 'test.py'". You'll end up pretty close to where the
interpreter decides to go bellyup. Setting a breakpoint on ceval.c line 612
(the "opcode = NEXTOP();' line) or so at that point helps doing a
per-bytecode check, though this made me miss the actual point of failure,
and I don't fancy doing it again just yet :P What I did see, however, was
that the reason for the crash isn't the pure recursion. It looks like the
recursiveness *does* get caught properly, and the interpreter raises an
error. And then prints that error over and over again, probably once for
every call to getattr(), and eventually *that* crashes (but why, I don't
know). In one test I did, it crashed in int_print, the print function for int
objects, which did 'fprintf(fp, "%ld", v->ival);'. The actual SEGV arrived
inside fprintf's internals. v->ival was a valid integer (though a high one)
and the problem was not dereferencing 'v'. 'fp' was stderr, according to its
_fileno member.
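For contrast, a healthy interpreter catches the same mistake and surfaces it as an ordinary exception rather than a segfault (a sketch; the Python of 2000 raised RuntimeError where modern Python uses RecursionError):

```python
class C:
    def __getattr__(self, name):
        return self.name   # common beginners' mistake: infinite recursion

try:
    C().attr
except RecursionError:
    print("maximum recursion depth exceeded -- caught, not a segfault")
```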

'ltrace' (if you have it) is also a nice tool to let loose on this kind of
script, by the way, though it does make the test take a lot longer, and you
really need enough diskspace to store the output ;-P

Back-to-augassign-docs-ly y'rs,

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From Vladimir.Marangozov@inrialpes.fr  Sat Sep  2 23:06:41 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Sun, 3 Sep 2000 00:06:41 +0200 (CEST)
Subject: [Python-Dev] Re: ... gcmodule.c,2.9,2.10
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOEKDHDAA.tim_one@email.msn.com> from "Tim Peters" at Sep 02, 2000 02:20:18 PM
Message-ID: <200009022206.AAA02255@python.inrialpes.fr>

Tim Peters wrote:
>
> There's not going to be consensus on this, as the threshold is a crude 
> handle on a complex problem.  

Hehe. Tim gets philosophic again <wink>  

>
> In cases like this, the geometric mean of the extreme positions is 
> always the best guess <0.8 wink>:
> 
> >>> import math
> >>> math.sqrt(5000 * 100)
> 707.10678118654755
> >>>
>
> So 9 times out of 10 we can run it with a threshold of 707, and 1 out of 10 
> with 708 <wink>.
> 
> Tuning strategies for gc *can* get as complex as OS scheduling algorithms, 
> and for the same reasons:  you're in the business of predicting the future 
> based on just a few neurons keeping track of gross summaries of what 
> happened before. 
> ...
> [snip]

Right on target, Tim! It is well known that the recent past is the best 
approximation of the near future and that the past as a whole is the only
approximation we have at our disposal of the long-term future. If you add 
to that axioms like "memory management schemes influence the OS long-term 
scheduler", "the 50% rule applies for all allocation strategies", etc.,
it is clear that if we want to approach the optimum, we definitely need
to adjust the collection frequency according to some proportional scheme.

But even without saying this, your argument about dynamic GC thresholds
is enough to put Neil into a state of deep depression regarding the
current GC API <0.9 wink>.

Now let's be pragmatic: it is clear that the garbage collector will
make it into 2.0 -- be it enabled or disabled by default. So let's stick
to a compromise: 500 for the beta, 1000 for the final release. This
somewhat complies with your geometric calculus, which mainly aims at
balancing the expressed opinions. It certainly isn't founded on
any existing theory or practice, and we all realized that despite the
impressive math.sqrt() <wink>.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From cgw@alum.mit.edu  Sun Sep  3 01:52:33 2000
From: cgw@alum.mit.edu (Charles G Waldman)
Date: Sat, 2 Sep 2000 19:52:33 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <20000902233647.Q12695@xs4all.nl>
References: <39AEC0F4.746656E2@per.dem.csiro.au>
 <14766.50283.758598.632542@bitdiddle.concentric.net>
 <14766.53002.467504.523298@beluga.mojam.com>
 <14766.53381.634928.615048@buffalo.fnal.gov>
 <14766.54177.584090.198596@beluga.mojam.com>
 <14767.54430.927663.710733@buffalo.fnal.gov>
 <20000902233647.Q12695@xs4all.nl>
Message-ID: <14769.41169.108895.723628@sirius.net.home>

I said:
 > This means that you're not hitting the rlimit at all but getting a 
 > real segfault!  Time to do setrlimit -c unlimited and break out GDB, 
 > I'd say.   
 
Thomas Wouters came back with: 
> I did figure out the problem isn't stackspace (which was already
> obvious) but *damned* if I know what the problem is.  I don't fancy
> doing it again just yet :P:P What I did see, however, was that the
> reason for the crash isn't the pure recursion. It looks like the
> recursiveness *does* get caught properly, and the interpreter raises
> an error. And then prints that error over and over again, probably
> once for every call to getattr(), and eventually *that* crashes (but
> why, I don't know. In one test I did, it crashed in int_print, the
> print function for int objects, which did 'fprintf(fp, "%ld",
> v->ival);'. The actual SEGV arrived inside fprintf's
> internals. v->ival was a valid integer (though a high one) and the
> problem was not dereferencing 'v'. 'fp' was stderr, according to its
> _fileno member.
 
I've got some more info: this crash only happens if you have built
with --enable-threads.  This brings in a different (thread-safe)
version of fprintf, which uses mutex locks on file objects so output
from different threads doesn't get scrambled together.  And the SEGV
that I saw was happening exactly where fprintf is trying to unlock the
mutex on stderr, so it can print "Maximum recursion depth exceeded".
 
This looks like more ammo for Guido's theory that there's something 
wrong with libpthread on linux, and right now I'm elbows-deep in the 
guts of libpthread trying to find out more.  Fun little project for a
Saturday night ;-)      
 
> 'ltrace' (if you have it) is also a nice tool to let loose on this
> kind of script, by the way, though it does make the test take a lot
> longer, and you really need enough diskspace to store the output ;-P
 
Sure, I've got ltrace, and also more diskspace than you really want to 
know about!

Working-at-a-place-with-lots-of-machines-can-be-fun-ly yr's,
					
					-Charles
 



From m.favas@per.dem.csiro.au  Sun Sep  3 01:53:11 2000
From: m.favas@per.dem.csiro.au (Mark Favas)
Date: Sun, 03 Sep 2000 08:53:11 +0800
Subject: [Python-Dev] failure in test_sre???
Message-ID: <39B1A0F7.D8FF0076@per.dem.csiro.au>

Is it just me, or is test_sre meant to fail, following the recent
changes to _sre.c?

Short failure message:
test test_sre failed -- Writing: 'sre.match("\\x%02x" % i, chr(i)) !=
None', expected: ''

Full failure messages:
Running tests on character literals
sre.match("\x%02x" % i, chr(i)) != None FAILED
Traceback (most recent call last):
  File "test_sre.py", line 18, in test
    r = eval(expression)
ValueError: invalid \x escape
sre.match("\x%02x0" % i, chr(i)+"0") != None FAILED
Traceback (most recent call last):
  File "test_sre.py", line 18, in test
    r = eval(expression)
ValueError: invalid \x escape
sre.match("\x%02xz" % i, chr(i)+"z") != None FAILED
Traceback (most recent call last):
  File "test_sre.py", line 18, in test
    r = eval(expression)
ValueError: invalid \x escape

(the above sequence is repeated another 7 times) 

-- 
Mark


From m.favas@per.dem.csiro.au  Sun Sep  3 03:05:03 2000
From: m.favas@per.dem.csiro.au (Mark Favas)
Date: Sun, 03 Sep 2000 10:05:03 +0800
Subject: [Python-Dev] Namespace collision between lib/xml and
 site-packages/xml
References: <200009010400.XAA30273@cj20424-a.reston1.va.home.com>
Message-ID: <39B1B1CF.572955FC@per.dem.csiro.au>

Guido van Rossum wrote:
> 
> You might be able to get the old XML-sig code to override the core xml
> package by creating a symlink named _xmlplus to it in site-packages
> though.

Nope - doing this allows the imports to succeed where before they were
failing, but I get a "SAXException: No parsers found" failure now. No
big deal - I'll probably rename the xml-sig stuff and include it in my
app.

-- 
Mark


From tim_one@email.msn.com  Sun Sep  3 04:18:31 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sat, 2 Sep 2000 23:18:31 -0400
Subject: [Python-Dev] failure in test_sre???
In-Reply-To: <39B1A0F7.D8FF0076@per.dem.csiro.au>
Message-ID: <LNBBLJKPBEHFEDALKOLCCELCHDAA.tim_one@email.msn.com>

[Mark Favas, on new test_sre failures]
> Is it just me, or is test_sre meant to fail, following the recent
> changes to _sre.c?

Checkins are never supposed to leave the test suite in a failing state, but
while that's "the rule" it's still too rarely the reality (although *much*
better than it was just a month ago -- whining works <wink>).  Offhand these
look like shallow new failures to me, related to /F's so-far partial
implementation of PEP 223 (Change the Meaning of \x Escapes).  I'll dig into it a
little more.  Rest assured it will get fixed before the 2.0b1 release!

> Short failure message:
> test test_sre failed -- Writing: 'sre.match("\\x%02x" % i, chr(i)) !=
> None', expected: ''
>
> Full failure messages:
> Running tests on character literals
> sre.match("\x%02x" % i, chr(i)) != None FAILED
> Traceback (most recent call last):
>   File "test_sre.py", line 18, in test
>     r = eval(expression)
> ValueError: invalid \x escape
> sre.match("\x%02x0" % i, chr(i)+"0") != None FAILED
> Traceback (most recent call last):
>   File "test_sre.py", line 18, in test
>     r = eval(expression)
> ValueError: invalid \x escape
> sre.match("\x%02xz" % i, chr(i)+"z") != None FAILED
> Traceback (most recent call last):
>   File "test_sre.py", line 18, in test
>     r = eval(expression)
> ValueError: invalid \x escape
>
> (the above sequence is repeated another 7 times)
>
> --
> Mark
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev@python.org
> http://www.python.org/mailman/listinfo/python-dev




From skip@mojam.com (Skip Montanaro)  Sun Sep  3 05:25:49 2000
From: skip@mojam.com (Skip Montanaro) (Skip Montanaro)
Date: Sat, 2 Sep 2000 23:25:49 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <20000902233647.Q12695@xs4all.nl>
References: <39AEC0F4.746656E2@per.dem.csiro.au>
 <14766.50283.758598.632542@bitdiddle.concentric.net>
 <14766.53002.467504.523298@beluga.mojam.com>
 <14766.53381.634928.615048@buffalo.fnal.gov>
 <14766.54177.584090.198596@beluga.mojam.com>
 <14767.54430.927663.710733@buffalo.fnal.gov>
 <20000902233647.Q12695@xs4all.nl>
Message-ID: <14769.53966.93066.283106@beluga.mojam.com>

    Thomas> In one test I did, it crashed in int_print, the print function
    Thomas> for int objects, which did 'fprintf(fp, "%ld", v->ival);'.  The
    Thomas> actual SEGV arrived inside fprintf's internals. v->ival was a
    Thomas> valid integer (though a high one) and the problem was not
    Thomas> dereferencing 'v'. 'fp' was stderr, according to its _fileno
    Thomas> member.

I get something similar.  The script conks out after 4491 calls (this with a
threaded interpreter).  It segfaults in _IO_vfprintf trying to print 4492 to
stdout.  All arguments to _IO_vfprintf appear valid (though I'm not quite
sure how to print the third, va_list, argument).

When I configure --without-threads, the script runs much longer, making it
past 18068.  It conks out in the same spot, however, trying to print 18069.
The fact that it occurs in the same place with and without threads (the
addresses of the two different _IO_vfprintf functions are different, which
implies different stdio libraries are active in the threading and
non-threading versions as Thomas said), suggests to me that the problem may
simply be that in the threading case each thread (even the main thread) is
limited to a much smaller stack.  Perhaps I'm seeing what I'm supposed to
see.  If the two versions were to crap out for different reasons, I doubt
I'd see them failing in the same place.
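The runaway-recursion probe under discussion can be sketched as follows (a
minimal stand-in for the find_recursionlimit script, not its actual code;
modern Python raises RecursionError where an unchecked 2000-era build could
segfault instead):

```python
import sys

def probe(depth=0):
    # Recurse until the interpreter's recursion limit stops us,
    # then report how deep we got.
    try:
        return probe(depth + 1)
    except RecursionError:
        return depth

sys.setrecursionlimit(5000)
print(probe())  # roughly the limit, minus frames already on the stack
```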

Skip




From cgw@fnal.gov  Sun Sep  3 06:34:24 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Sun, 3 Sep 2000 00:34:24 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14769.53966.93066.283106@beluga.mojam.com>
References: <39AEC0F4.746656E2@per.dem.csiro.au>
 <14766.50283.758598.632542@bitdiddle.concentric.net>
 <14766.53002.467504.523298@beluga.mojam.com>
 <14766.53381.634928.615048@buffalo.fnal.gov>
 <14766.54177.584090.198596@beluga.mojam.com>
 <14767.54430.927663.710733@buffalo.fnal.gov>
 <20000902233647.Q12695@xs4all.nl>
 <14769.53966.93066.283106@beluga.mojam.com>
Message-ID: <14769.58081.532.747747@buffalo.fnal.gov>

Skip Montanaro writes:

 > When I configure --without-threads, the script runs much longer, making it
 > past 18068.  It conks out in the same spot, however, trying to print 18069.
 > The fact that it occurs in the same place with and without threads (the
 > addresses of the two different _IO_vfprintf functions are different, which
 > implies different stdio libraries are active in the threading and
 > non-threading versions as Thomas said), suggests to me that the problem may
 > simply be that in the threading case each thread (even the main thread) is
 > limited to a much smaller stack.  Perhaps I'm seeing what I'm supposed to
 > see.  If the two versions were to crap out for different reasons, I doubt
 > I'd see them failing in the same place.

Yes, libpthread defines its own version of _IO_vfprintf.

Try this experiment:  do a "ulimit -a" to see what the stack size
limit is; start your Python process; find its PID, and before you
start your test, go into another window and run the command
watch -n 0 "grep Stk /proc/<pythonpid>/status"

This will show exactly how much stack Python is using.  Then start the
runaway-recursion test.  If it craps out when the stack usage hits the
rlimit, you are seeing what you are supposed to see.  If it craps out
anytime sooner, there is a real bug of some sort, as I'm 99% sure
there is.
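The steps above can be collected into a small shell sketch (Linux-specific;
on later kernels the stack field appears as "VmStk" in /proc/<pid>/status,
and sleep stands in for the Python process under test):

```shell
ulimit -s                         # current stack size limit, in kilobytes
sleep 30 &                        # stand-in for the Python process
PID=$!
grep -i stk "/proc/$PID/status"   # VmStk: current stack usage; re-run
                                  # (or wrap in watch) while the test runs
kill $PID
```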


From thomas@xs4all.net  Sun Sep  3 08:44:51 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Sun, 3 Sep 2000 09:44:51 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14769.41169.108895.723628@sirius.net.home>; from cgw@alum.mit.edu on Sat, Sep 02, 2000 at 07:52:33PM -0500
References: <39AEC0F4.746656E2@per.dem.csiro.au> <14766.50283.758598.632542@bitdiddle.concentric.net> <14766.53002.467504.523298@beluga.mojam.com> <14766.53381.634928.615048@buffalo.fnal.gov> <14766.54177.584090.198596@beluga.mojam.com> <14767.54430.927663.710733@buffalo.fnal.gov> <20000902233647.Q12695@xs4all.nl> <14769.41169.108895.723628@sirius.net.home>
Message-ID: <20000903094451.R12695@xs4all.nl>

On Sat, Sep 02, 2000 at 07:52:33PM -0500, Charles G Waldman wrote:

> This looks like more ammo for Guido's theory that there's something 
> wrong with libpthread on linux, and right now I'm elbows-deep in the 
> guts of libpthread trying to find out more.  Fun little project for a
> Saturday night ;-)      

I concur that it's probably not Python-related, even if it's probably
Python-triggered (and possibly Python-induced, because of some setting or
other) -- but I think it would be very nice to work around it! And we have
roughly the same recursion limit for BSDI with a 2Mbyte stack limit, so let's
not adjust that guesstimate just yet.

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From tim_one@email.msn.com  Sun Sep  3 09:25:38 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sun, 3 Sep 2000 04:25:38 -0400
Subject: [Python-Dev] failure in test_sre???
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCELCHDAA.tim_one@email.msn.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCIELOHDAA.tim_one@email.msn.com>

> [Mark Favas, on new test_sre failures]
> > Is it just me, or is test_sre meant to fail, following the recent
> > changes to _sre.c?

I just checked in a fix for this.  /F also implemented PEP 223, and it had a
surprising consequence for test_sre!  There were three test lines (in a loop,
that's why you got so many failures) of the form:

    test(r"""sre.match("\x%02x" % i, chr(i)) != None""", 1)

Note the

    "\x%02x"

part.  Before PEP 223, that "expanded" to itself:

    "\x%02x"

because the damaged \x escape was ignored.  After PEP 223, it raised the

    ValueError: invalid \x escape

you kept seeing.  The fix was merely to change these 3 lines to use, e.g.,

    r"\x%02x"

instead.  Pattern strings should usually be r-strings anyway.
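A quick illustration of the fix, using the modern re module in place of the
thread's sre (today the broken "\x%02x" literal is a SyntaxError rather than
a ValueError, but the raw-string cure is the same):

```python
import re

i = 0x41                     # any byte value; 0x41 is 'A'
# A raw string passes the four characters backslash, x, 4, 1 through
# to the regex engine, which interprets \x41 itself.
pattern = r"\x%02x" % i
assert re.match(pattern, chr(i)) is not None
```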




From Vladimir.Marangozov@inrialpes.fr  Sun Sep  3 10:21:42 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Sun, 3 Sep 2000 11:21:42 +0200 (CEST)
Subject: [Python-Dev] Copyright gag
Message-ID: <200009030921.LAA08963@python.inrialpes.fr>

Even CVS got confused about the Python's copyright <wink>

~> cvs update
...
cvs server: Updating Demo/zlib
cvs server: Updating Doc
cvs server: nothing known about Doc/COPYRIGHT
cvs server: Updating Doc/api
cvs server: Updating Doc/dist
...

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From effbot@telia.com  Sun Sep  3 11:10:01 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Sun, 3 Sep 2000 12:10:01 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src LICENSE,1.1.2.7,1.1.2.8
References: <200009030228.TAA12677@slayer.i.sourceforge.net>
Message-ID: <00a501c0158f$25a5bfa0$766940d5@hagrid>

guido wrote:
> Modified Files:
>       Tag: cnri-16-start
> LICENSE 
> Log Message:
> Set a release date, now that there's agreement between
> CNRI and the FSF.

and then he wrote:

> Modified Files:
> LICENSE 
> Log Message:
> Various edits.  Most importantly, added dual licensing.  Also some
> changes suggested by BobW.

where "dual licensing" means:

    ! 3. Instead of using this License, you can redistribute and/or modify
    ! the Software under the terms of the GNU General Public License as
    ! published by the Free Software Foundation; either version 2, or (at
    ! your option) any later version.  For a copy of the GPL, see
    ! http://www.gnu.org/copyleft/gpl.html.
  
what's going on here?  what exactly does the "agreement" mean?

(I can guess, but my guess doesn't make me happy. I didn't really
think I would end up in a situation where people can take code I've
written, make minor modifications to it, and re-release it in source
form in a way that makes it impossible for me to use it...)

</F>



From license-py20@beopen.com  Sun Sep  3 15:03:46 2000
From: license-py20@beopen.com (Guido van Rossum)
Date: Sun, 03 Sep 2000 09:03:46 -0500
Subject: [Python-Dev] Re: Conflict with the GPL
In-Reply-To: Your message of "Sun, 03 Sep 2000 12:09:12 +0200."
 <00a401c0158f$24dc5520$766940d5@hagrid>
References: <LNBBLJKPBEHFEDALKOLCAEGFHDAA.tim_one@email.msn.com> <39AF83F9.67DA7A0A@lemburg.com> <dcwvgu56li.fsf@pacific.beopen.com>
 <00a401c0158f$24dc5520$766940d5@hagrid>
Message-ID: <200009031403.JAA11856@cj20424-a.reston1.va.home.com>

> bob weiner wrote:    
> > We are doing a lot of work at BeOpen with CNRI to get them to allow
> > the GPL as an alternative license across the CNRI-derived parts of the
> > codebase.  /.../  We at BeOpen want GPL-compatibility and have pushed
> > for that since we started with any Python licensing issues.

Fredrik Lundh replied:
> my understanding was that the consortium members agreed
> that GPL-compatibility was important, but that it didn't mean
> that a licensing Python under GPL was a good thing.
> 
> was dual licensing discussed on the consortium meeting?

Can't remember, probably was mentioned as one of the considered
options.  Certainly the consortium members present at the meeting in
Monterey agreed that GPL compatibility was important.

> is the consortium (and this mailing list) irrelevant in this
> discussion?

You posted a +0 for dual licensing if it was the *only* possibility to
reach GPL-compatibility for future Python licenses.  That's also my
own stance on this.

I don't believe I received any other relevant feedback.  I did see
several posts from consortium members Paul Everitt and Jim Ahlstrom,
defending the choice of law clause in the CNRI license and explaining
why the GPL is not a great license and why a pure GPL license is
unacceptable for Python; I take these very seriously.

Bob Weiner and I talked for hours with Kahn on Friday night and
Saturday; I talked to Stallman several times on Saturday; Kahn and
Stallman talked on Saturday.  Dual licensing really was the *only* way
to reach an agreement.  So I saw no way out of the impasse except to
just do it and get it over with.

Kahn insisted that 1.6final be released before 2.0b1 and 2.0b1 be made
a derived work of 1.6final.  To show that he was serious, he shut off
our login access to python.org and threatened with legal action if we
would proceed with the 2.0b1 release as a derived work of 1.6b1.  I
don't understand why this is so important to him, but it clearly is.
I want 2.0b1 to be released (don't you?) so I put an extra effort in
to round up Stallman and make sure he and Kahn got on the phone to get
a resolution, and for a blissful few hours I believed it was all done.

Unfortunately the fat lady hasn't sung yet.

After we thought we had reached agreement, Stallman realized that
there are two interpretations of what will happen next:

    1. BeOpen releases a version for which the license is, purely and
    simply, the GPL.

    2. BeOpen releases a version which states the GPL as the license,
    and also states the CNRI license as applying with its text to part
    of the code.

His understanding of the agreement (and that of his attorney, Eben
Moglen, a law professor at NYU) was based on #1.  It appears that what
CNRI will explicitly allow BeOpen (and what the 1.6 license already
allows) is #2.  Stallman will have to get Moglen's opinion, which may
take weeks.  It's possible that they think that the BeOpen license is
still incompatible with the GPL.  In that case (assuming it happens
within a reasonable time frame, and not e.g. 5 years from now :-) we
have Kahn's agreement to go back to the negotiation table and talk to
Stallman about possible modifications to the CNRI license.  If the
license changes, we'll re-release Python 1.6 as 1.6.1 with the new
license, and we'll use that for BeOpen releases.  If dual-licensing is
no longer needed at that point I'm for taking it out again.

> > > BTW, anybody got a word from RMS on whether the "choice of law"
> > > is really the only one bugging him ?
> >
> > Yes, he has told me that was the only remaining issue.
> 
> what's the current status here?  Guido just checked in a new
> 2.0 license that doesn't match the text he posted here a few
> days ago.  Most notable, the new license says:
> 
>     3. Instead of using this License, you can redistribute and/or modify
>     the Software under the terms of the GNU General Public License as
>     published by the Free Software Foundation; either version 2, or (at
>     your option) any later version.  For a copy of the GPL, see
>     http://www.gnu.org/copyleft/gpl.html.
> 
> on the other hand, another checkin message mentions agreement
> between CNRI and the FSF.  did they agree to disagree?

I think I've explained most of this above.  I don't recall that
checkin message.  Which file?  I checked the cvs logs for README and
LICENSE for both the 1.6 and 2.0 branch.

Anyway, the status is that 1.6 final is incompatible with the GPL and
that for 2.0b1 we may or may not have GPL compatibility based on the
dual licensing clause.

I'm not too happy with the final wart.  We could do the following:
take the dual licensing clause out of 2.0b1, and promise to put it
back into 2.0final if it is still needed.  After all, it's only a
beta, and we don't *want* Debian to put 2.0b1 in their distribution,
do we?  But personally I'm of an optimistic nature; I still hope that
Moglen will find this solution acceptable and that this will be the
end of the story.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From effbot@telia.com  Sun Sep  3 14:36:52 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Sun, 3 Sep 2000 15:36:52 +0200
Subject: [Python-Dev] Re: Conflict with the GPL
References: <LNBBLJKPBEHFEDALKOLCAEGFHDAA.tim_one@email.msn.com> <39AF83F9.67DA7A0A@lemburg.com> <dcwvgu56li.fsf@pacific.beopen.com>              <00a401c0158f$24dc5520$766940d5@hagrid>  <200009031403.JAA11856@cj20424-a.reston1.va.home.com>
Message-ID: <005a01c015ac$079f1c00$766940d5@hagrid>

guido wrote:

> I want 2.0b1 to be released (don't you?) so I put an extra effort in
> to round up Stallman and make sure he and Kahn got on the phone to get
> a resolution, and for a blissful few hours I believed it was all done.

well, after reading the rest of your mail, I'm not so
sure...

> After we thought we had reached agreement, Stallman realized that
> there are two interpretations of what will happen next:
> 
>     1. BeOpen releases a version for which the license is, purely and
>     simply, the GPL.
> 
>     2. BeOpen releases a version which states the GPL as the license,
>     and also states the CNRI license as applying with its text to part
>     of the code.

"to part of the code"?

are you saying the 1.6 will be the last version that is
truly free for commercial use???

what parts would be GPL-only?

</F>



From guido@beopen.com  Sun Sep  3 15:35:31 2000
From: guido@beopen.com (Guido van Rossum)
Date: Sun, 03 Sep 2000 09:35:31 -0500
Subject: [Python-Dev] Re: Conflict with the GPL
In-Reply-To: Your message of "Sun, 03 Sep 2000 15:36:52 +0200."
 <005a01c015ac$079f1c00$766940d5@hagrid>
References: <LNBBLJKPBEHFEDALKOLCAEGFHDAA.tim_one@email.msn.com> <39AF83F9.67DA7A0A@lemburg.com> <dcwvgu56li.fsf@pacific.beopen.com> <00a401c0158f$24dc5520$766940d5@hagrid> <200009031403.JAA11856@cj20424-a.reston1.va.home.com>
 <005a01c015ac$079f1c00$766940d5@hagrid>
Message-ID: <200009031435.JAA12281@cj20424-a.reston1.va.home.com>

> guido wrote:
> 
> > I want 2.0b1 to be released (don't you?) so I put an extra effort in
> > to round up Stallman and make sure he and Kahn got on the phone to get
> > a resolution, and for a blissful few hours I believed it was all done.
> 
> well, after reading the rest of your mail, I'm not so
> sure...

Agreed. :-(

> > After we thought we had reached agreement, Stallman realized that
> > there are two interpretations of what will happen next:
> > 
> >     1. BeOpen releases a version for which the license is, purely and
> >     simply, the GPL.
> > 
> >     2. BeOpen releases a version which states the GPL as the license,
> >     and also states the CNRI license as applying with its text to part
> >     of the code.
> 
> "to part of the code"?
> 
> are you saying the 1.6 will be the last version that is
> truly free for commercial use???
> 
> what parts would be GPL-only?

Aaaaargh!  Please don't misunderstand me!  No part of Python will be
GPL-only!  At best we'll dual license.

This was quoted directly from Stallman's mail about this issue.  *He*
doesn't care about the other half of the dual license, so he doesn't
mention it.

Sorry!!!!!!!!!!!!!!!!!!!!!!!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From guido@beopen.com  Sun Sep  3 16:18:07 2000
From: guido@beopen.com (Guido van Rossum)
Date: Sun, 03 Sep 2000 10:18:07 -0500
Subject: [Python-Dev] New commands to display license, credits, copyright info
Message-ID: <200009031518.KAA12926@cj20424-a.reston1.va.home.com>

The copyright in 2.0 will be 5 or 6 lines (three copyright statements,
with an "All Rights Reserved" for each -- according to CNRI's wishes).

This will cause a lot of scrolling at the start of a session.

Does anyone care?

Bob Weiner (my boss at BeOpen) suggested that we could add commands
to display such information instead.  Here's a typical suggestion with
his idea implemented:

    Python 2.0b1 (#134, Sep  3 2000, 10:04:03) 
    [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
    Type "copyright", "license" or "credits" for this information.
    >>> copyright
    Copyright (c) 2000 BeOpen.com; All Rights Reserved.
    Copyright (c) 1995-2000 Corporation for National Research Initiatives;
    All Rights Reserved.
    Copyright (c) 1991-1995 Stichting Mathematisch Centrum, Amsterdam;
    All Rights Reserved.

    >>> credits
    A BeOpen PythonLabs-led production.

    >>> license
    HISTORY OF THE SOFTWARE
    =======================

    Python was created in the early 1990s by Guido van Rossum at Stichting
    Mathematisch Centrum (CWI) in the Netherlands as a successor of a
    language called ABC.  Guido is Python's principal author, although it
        .
        .(etc)
        .
    Hit Return for more, or q (and Return) to quit: q

    >>>

How would people like this?  (The blank line before the prompt is
unavoidable due to the mechanics of how objects are printed.)

Any suggestions for what should go in the "credits" command?

(I considered taking the detailed (messy!) GCC version info out as
well, but decided against it.  There's a bit of a tradition in bug
reports to quote the interpreter header and showing the bug in a
sample session; the compiler version is often relevant.  Expecting
that bug reporters will include this information manually won't work.
Instead, I broke it up in two lines.)

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From cgw@alum.mit.edu  Sun Sep  3 16:53:08 2000
From: cgw@alum.mit.edu (Charles G Waldman)
Date: Sun, 3 Sep 2000 10:53:08 -0500 (CDT)
Subject: [Python-Dev] New commands to display licence, credits, copyright info
Message-ID: <14770.29668.639079.511087@sirius.net.home>

I like Bob W's suggestion a lot.  It is more open-ended and scalable
than just continuing to add more and more lines to the startup
messages.  I assume these commands would only be in effect in
interactive mode, right?

You could also maybe add a "help" command, which, if nothing else,
could get people pointed at the online tutorial/manuals.

And, by all means, please keep the compiler version in the startup
message!


From guido@beopen.com  Sun Sep  3 17:59:55 2000
From: guido@beopen.com (Guido van Rossum)
Date: Sun, 03 Sep 2000 11:59:55 -0500
Subject: [Python-Dev] New commands to display licence, credits, copyright info
In-Reply-To: Your message of "Sun, 03 Sep 2000 10:53:08 EST."
 <14770.29668.639079.511087@sirius.net.home>
References: <14770.29668.639079.511087@sirius.net.home>
Message-ID: <200009031659.LAA14864@cj20424-a.reston1.va.home.com>

> I like Bob W's suggestion a lot.  It is more open-ended and scalable
> than just continuing to add more and more lines to the startup
> messages.  I assume these commands would only be in effect in
> interactive mode, right?

Actually, for the benefit of tools like IDLE (which have an
interactive read-eval-print loop but don't appear to be interactive
during initialization), they are always added.  They are implemented
as funny builtins, whose repr() prints the info and then returns "".

> You could also maybe add a "help" command, which, if nothing else,
> could get people pointed at the online tutorial/manuals.

Sure -- and "doc".  Later, after 2.0b1.

> And, by all means, please keep the compiler version in the startup
> message!

Will do.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From cgw@alum.mit.edu  Sun Sep  3 17:02:09 2000
From: cgw@alum.mit.edu (Charles G Waldman)
Date: Sun, 3 Sep 2000 11:02:09 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix, etc
Message-ID: <14770.30209.733300.519614@sirius.net.home>

Skip Montanaro writes:

> When I configure --without-threads, the script runs much longer,
> making it past 18068.  It conks out in the same spot, however,
> trying to print 18069.

I am utterly unable to reproduce this.  With "ulimit -s unlimited" and
a no-threads version of Python, "find_recursionlimit" ran overnight on
my system and got up to a recursion depth of 98,400 before I killed it
off.  It was using 74MB of stack space at this point, and my system
was running *really* slow (probably because my pathetic little home
system only has 64MB of physical memory!).

Are you absolutely sure that when you built your non-threaded Python
you did a thorough housecleaning, e.g. "make clobber"?  Sometimes I get
paranoid and type "make distclean", just to be sure - but this
shouldn't be necessary, right?

Can you give me more info about your system?  I'm at kernel 2.2.16,
gcc 2.95.2 and glibc-2.1.3.  How about you?

I've got to know what's going on here, because your experimental
results don't conform to my theory, and I'd rather change your results
than have to change my theory <wink>

     quizzically yr's,

		  -C






From tim_one@email.msn.com  Sun Sep  3 18:17:34 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sun, 3 Sep 2000 13:17:34 -0400
Subject: [License-py20] Re: [Python-Dev] Re: Conflict with the GPL
In-Reply-To: <005a01c015ac$079f1c00$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEMJHDAA.tim_one@email.msn.com>

[Fredrik Lundh]
> ...
> are you saying the 1.6 will be the last version that is
> truly free for commercial use???

If this is a serious question, it disturbs me, because it would demonstrate
a massive meltdown in trust between the community and BeOpen PythonLabs.

If we were willing to screw *any* of Python's

   + Commercial users.
   + Open Source users.
   + GPL users.

we would have given up a month ago (when we first tried to release 2b1 with
a BSD-style license but got blocked).  Unfortunately, the only power we have
in this now is the power to withhold release until the other parties (CNRI
and FSF) agree on a license they can live with too.  If the community thinks
Guido would sell out Python's commercial users to get the FSF's blessing,
*or vice versa*, maybe we should just give up on the basis that we've lost
peoples' trust anyway.  Delaying the releases time after time sure isn't
helping BeOpen's bottom line.




From tim_one@email.msn.com  Sun Sep  3 18:43:15 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sun, 3 Sep 2000 13:43:15 -0400
Subject: [Python-Dev] New commands to display license, credits, copyright info
In-Reply-To: <200009031518.KAA12926@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEMKHDAA.tim_one@email.msn.com>

[Guido]
> The copyright in 2.0 will be 5 or 6 lines (three copyright statements,
> with an "All Rights Reserved" for each -- according to CNRI's wishes).
>
> This will cause a lot of scrolling at the start of a session.
>
> Does anyone care?

I personally hate it:

C:\Code\python\dist\src\PCbuild>python
Python 2.0b1 (#0, Sep  3 2000, 00:31:47) [MSC 32 bit (Intel)] on win32
Copyright (c) 2000 BeOpen.com; All Rights Reserved.
Copyright (c) 1995-2000 Corporation for National Research Initiatives;
All Rights Reserved.
Copyright (c) 1991-1995 Stichting Mathematisch Centrum, Amsterdam;
All Rights Reserved.
>>>

Besides being plain ugly, under Win9x DOS boxes are limited to a max height
of 50 lines, and that's also the max buffer size.  This mass of useless
verbiage (I'm still a programmer 20 minutes of each day <0.7 wink>) has
already interfered with my ability to test the Windows version of Python
(half the old build's stuff I wanted to compare the new build's behavior
with scrolled off the screen the instant I started the new build!).

> Bob Weiner (my boss at BeOpen) suggested that we could add commands
> to display such information instead.  Here's a typical suggestion with
> his idea implemented:
>
>     Python 2.0b1 (#134, Sep  3 2000, 10:04:03)
>     [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
>     Type "copyright", "license" or "credits" for this information.
>     >>> ...

Much better.

+1.




From tim_one@email.msn.com  Sun Sep  3 20:59:36 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sun, 3 Sep 2000 15:59:36 -0400
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src LICENSE,1.1.2.7,1.1.2.8
In-Reply-To: <00a501c0158f$25a5bfa0$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEMPHDAA.tim_one@email.msn.com>

[Fredrik Lundh]
> ...
> I didn't really think I would end up in a situation where people
> can take code I've written, make minor modifications to it, and re-
> release it in source form in a way that makes it impossible for me
> to use it...)

People have *always* been able to do that, /F.  The CWI license was
GPL-compatible (according to RMS), so anyone all along has been able to take
the Python distribution in whole or in part and re-release it under the
GPL -- or even more restrictive licenses than that.  Heck, they don't even
have to reveal their modifications to your code if they don't feel like it
(although they would have to under the GPL).

So there's nothing new here.  In practice, I don't think anyone yet has felt
abused (well, not by *this* <wink>).




From tim_one@email.msn.com  Sun Sep  3 21:22:43 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sun, 3 Sep 2000 16:22:43 -0400
Subject: [Python-Dev] Copyright gag
In-Reply-To: <200009030921.LAA08963@python.inrialpes.fr>
Message-ID: <LNBBLJKPBEHFEDALKOLCIENBHDAA.tim_one@email.msn.com>

[Vladimir Marangozov]
> Sent: Sunday, September 03, 2000 5:22 AM
> To: Python core developers
> Subject: [Python-Dev] Copyright gag
>
> Even CVS got confused about the Python's copyright <wink>
>
> ~> cvs update
> ...
> cvs server: Updating Demo/zlib
> cvs server: Updating Doc
> cvs server: nothing known about Doc/COPYRIGHT
> cvs server: Updating Doc/api
> cvs server: Updating Doc/dist
> ...

Yes, we're all seeing that.  I filed a bug report on it with SourceForge; no
resolution yet; we can't get at the CVS files directly (for "security
reasons"), so they'll have to find the damage & fix it themselves.





From trentm@ActiveState.com  Sun Sep  3 22:10:43 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Sun, 3 Sep 2000 14:10:43 -0700
Subject: [Python-Dev] New commands to display license, credits, copyright info
In-Reply-To: <200009031518.KAA12926@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Sun, Sep 03, 2000 at 10:18:07AM -0500
References: <200009031518.KAA12926@cj20424-a.reston1.va.home.com>
Message-ID: <20000903141043.B28584@ActiveState.com>

On Sun, Sep 03, 2000 at 10:18:07AM -0500, Guido van Rossum wrote:
>     Python 2.0b1 (#134, Sep  3 2000, 10:04:03) 
>     [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2

Yes, I like getting rid of the copyright verbosity.

>     Type "copyright", "license" or "credits" for this information.
>     >>> copyright
>     >>> credits
>     >>> license
>     >>>

... but do we need these?  Can we not just add -V, --version,
--copyright, etc. switches?  Not a big deal, though.


Trent

-- 
Trent Mick
TrentM@ActiveState.com


From nascheme@enme.ucalgary.ca  Mon Sep  4 00:28:04 2000
From: nascheme@enme.ucalgary.ca (Neil Schemenauer)
Date: Sun, 3 Sep 2000 17:28:04 -0600
Subject: [Python-Dev] New commands to display license, credits, copyright info
In-Reply-To: <200009031518.KAA12926@cj20424-a.reston1.va.home.com>; from Guido van Rossum on Sun, Sep 03, 2000 at 10:18:07AM -0500
References: <200009031518.KAA12926@cj20424-a.reston1.va.home.com>
Message-ID: <20000903172804.A20336@keymaster.enme.ucalgary.ca>

On Sun, Sep 03, 2000 at 10:18:07AM -0500, Guido van Rossum wrote:
> Does anyone care?

Yes.  Although not too much.

> Bob Weiner (my boss at BeOpen) suggested that we could add commands
> to display such information instead.

Much nicer except for one nit.

>     Python 2.0b1 (#134, Sep  3 2000, 10:04:03) 
>     [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
>     Type "copyright", "license" or "credits" for this information.
                                                   ^^^^

For what information?

  Neil


From jeremy@beopen.com  Mon Sep  4 00:59:12 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Sun, 3 Sep 2000 19:59:12 -0400 (EDT)
Subject: [Python-Dev] New commands to display license, credits, copyright info
In-Reply-To: <20000903172804.A20336@keymaster.enme.ucalgary.ca>
References: <200009031518.KAA12926@cj20424-a.reston1.va.home.com>
 <20000903172804.A20336@keymaster.enme.ucalgary.ca>
Message-ID: <14770.58832.801784.267646@bitdiddle.concentric.net>

>>>>> "NS" == Neil Schemenauer <nascheme@enme.ucalgary.ca> writes:

  >> Python 2.0b1 (#134, Sep 3 2000, 10:04:03) 
  >> [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 
  >> Type "copyright", "license" or "credits" for this information.
  NS>                                             ^^^^
  NS> For what information?

I think this is a one-line version of 'Type "copyright" for copyright
information, "license" for license information, or "credits" for
credits information.'

I think the meaning is clear if the phrasing is awkward.  Would 'that'
be any better than 'this'?

Jeremy


From root@buffalo.fnal.gov  Mon Sep  4 01:00:00 2000
From: root@buffalo.fnal.gov (root)
Date: Sun, 3 Sep 2000 19:00:00 -0500
Subject: [Python-Dev] New commands to display license, credits, copyright info
Message-ID: <200009040000.TAA19857@buffalo.fnal.gov>

Jeremy wrote:

 > I think the meaning is clear if the phrasing is awkward.  Would 'that'
 > be any better than 'this'?

To my ears, "that" is just as awkward as "this".  But in this context,
I think "more" gets the point across and sounds much more natural.



From Vladimir.Marangozov@inrialpes.fr  Mon Sep  4 01:07:03 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Mon, 4 Sep 2000 02:07:03 +0200 (CEST)
Subject: [Python-Dev] libdb on by default, but no db.h
Message-ID: <200009040007.CAA14488@python.inrialpes.fr>

On my AIX combo, configure assumes --with-libdb (yes) but reports that

...
checking for db_185.h... no
checking for db.h... no
...

This leaves the bsddbmodule enabled but it can't compile, obviously.
So this needs to be fixed ASAP.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From Vladimir.Marangozov@inrialpes.fr  Mon Sep  4 02:16:20 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Mon, 4 Sep 2000 03:16:20 +0200 (CEST)
Subject: [Python-Dev] New commands to display license, credits, copyright info
In-Reply-To: <200009031518.KAA12926@cj20424-a.reston1.va.home.com> from "Guido van Rossum" at Sep 03, 2000 10:18:07 AM
Message-ID: <200009040116.DAA14774@python.inrialpes.fr>

Guido van Rossum wrote:
> 
> The copyright in 2.0 will be 5 or 6 lines (three copyright statements,
> with an "All Rights Reserved" for each -- according to CNRI's wishes).
> 
> This will cause a lot of scrolling at the start of a session.
> 
> Does anyone care?

Not much, but this is annoying information anyway :-)

> 
>     Python 2.0b1 (#134, Sep  3 2000, 10:04:03) 
>     [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
>     Type "copyright", "license" or "credits" for this information.
>     >>> copyright
>     Copyright (c) 2000 BeOpen.com; All Rights Reserved.
>     Copyright (c) 1995-2000 Corporation for National Research Initiatives;
>     All Rights Reserved.
>     Copyright (c) 1991-1995 Stichting Mathematisch Centrum, Amsterdam;
>     All Rights Reserved.

A semicolon before "All rights reserved" is ugly. IMO, it should be a period.
All rights reserved probably needs to go to a new line for the three
copyright holders. Additionally, they can be separated by a blank line
for readability.

Otherwise, I like the proposed "type ... for more information".
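For the curious, one way to implement such interactive commands, and roughly the trick CPython's site.py later settled on, is to bind names to objects whose repr() is the message itself, so merely evaluating the name at the >>> prompt prints it. A minimal sketch (the class name and text here are illustrative, not the actual implementation):

```python
class _Printer:
    """An object that displays a message when evaluated interactively."""
    def __init__(self, text):
        self._text = text
    def __repr__(self):
        # The interactive prompt prints repr() of an expression's result,
        # so returning the message is enough to display it.
        return self._text

copyright = _Printer(
    "Copyright (c) 2000 BeOpen.com.\n"
    "All Rights Reserved."
)

print(repr(copyright))  # what ">>> copyright" would display
```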

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From skip@mojam.com (Skip Montanaro)  Mon Sep  4 02:10:26 2000
From: skip@mojam.com (Skip Montanaro) (Skip Montanaro)
Date: Sun, 3 Sep 2000 20:10:26 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix, etc
In-Reply-To: <14770.30209.733300.519614@sirius.net.home>
References: <14770.30209.733300.519614@sirius.net.home>
Message-ID: <14770.63106.529258.156519@beluga.mojam.com>

    Charles> I am utterly unable to reproduce this.  With "ulimit -s
    Charles> unlimited" and a no-threads version of Python,
    Charles> "find_recursionlimit" ran overnight on my system and got up to
    Charles> a recursion depth of 98,400 before I killed it off.

Mea culpa.  It seems I forgot the "ulimit -s unlimited" command.  Keep your
theory, but get a little more memory.  It only took me a few seconds to
exceed a recursion depth of 100,000 after properly setting the stack size
limit... ;-)
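For context, a small illustration of the knob find_recursionlimit probes (a sketch against current Python, names illustrative): sys.setrecursionlimit caps Python-level recursion well before the C stack or "ulimit -s" comes into play, and the probe script keeps raising that limit until the real stack gives out.

```python
import sys

def depth(n=0):
    # Recurse until Python's own limit trips, and report how deep we got.
    try:
        return depth(n + 1)
    except RecursionError:
        return n

sys.setrecursionlimit(3000)
shallow = depth()
sys.setrecursionlimit(6000)
deep = depth()
# The interpreter limit, not the C stack, is what stopped us each time.
assert shallow < deep
```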

Skip





From cgw@alum.mit.edu  Mon Sep  4 03:33:24 2000
From: cgw@alum.mit.edu (Charles G Waldman)
Date: Sun, 3 Sep 2000 21:33:24 -0500
Subject: [Python-Dev] Thread problems on Linux
Message-ID: <200009040233.VAA27866@sirius>

No, I still don't have the answer, but I came across a very interesting
bit in the `info' files for glibc-2.1.3.  Under a heading "Specific Advice
for Linux Systems", along with a bunch of info about installing glibc,
is this gem:

 >    You cannot use `nscd' with 2.0 kernels, due to bugs in the
 > kernel-side thread support.  `nscd' happens to hit these bugs
 > particularly hard, but you might have problems with any threaded
 > program.

Now, they are talking about 2.0 and I assume everyone here running Linux
is running 2.2.  However it makes one wonder whether all the bugs in
kernel-side thread support are really fixed in 2.2.  One of these days
we'll figure it out...



From tim_one@email.msn.com  Mon Sep  4 03:44:28 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sun, 3 Sep 2000 22:44:28 -0400
Subject: [Python-Dev] Thread problems on Linux
In-Reply-To: <200009040233.VAA27866@sirius>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEOBHDAA.tim_one@email.msn.com>

Did we ever get a little "pure C" program that illustrates the mystery here?
That's probably still the only way to get a Linux guru interested, and also
the best way to know whether the problem is fixed in a future release (i.e.,
by running the sucker and seeing whether it still misbehaves).

I could believe, e.g., that they fixed pthread locks fine, but that there's
still a subtle problem with pthread condition vrbls.  To the extent Jeremy's
stacktraces made any sense, they showed insane condvar symptoms (a parent
doing a pthread_cond_wait yet chewing cycles at a furious pace).




From tim_one@email.msn.com  Mon Sep  4 04:11:09 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sun, 3 Sep 2000 23:11:09 -0400
Subject: [Python-Dev] Even more prerelease Python fun on Windows!
In-Reply-To: <007901c014c0$852eff60$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEOCHDAA.tim_one@email.msn.com>

[Fredrik Lundh]
> just fyi, Tkinter seems to be extremely unstable on Win95 and
> Win98FE (when shut down, the python process grabs the key-
> board and hangs.  the only way to kill the process is to reboot)
>
> the same version of Tk (wish) works just fine...

So what can we do about this?  I'm wary about two things:

1. Thomas reported one instance of Win98FE rot, of a kind that simply
   plagues Windows for any number of reasons.  He wasn't able to
   reproduce it.  So while I've noted his report, I'm giving it little
   weight so far.

2. I never use Tkinter, except indirectly for IDLE.  I've been in and
   out of 2b1 IDLE on Win98SE all day and haven't seen a hint of trouble.

   But you're a Tkinter power user of the highest order.  So one thing
   I'm wary of is that you may have magical Tcl/Tk envars (or God only
   knows what else) set up to deal with the multiple copies of Tcl/Tk
   I'm betting you have on your machine.  In fact, I *know* you have
   multiple Tcl/Tks sitting around because of your wish comment:
   the Python installer no longer installs wish, so you got that from
   somewhere else.  Are you positive you're not mixing versions
   somehow?  If anyone could mix them in a way we can't stop, it's
   you <wink>.

If anyone else is having Tkinter problems, they haven't reported them.
Although I suspect few have tried it!

In the absence of more helpers, can you pass on a specific (small if
possible) program that exhibits the "hang" problem?  And by "extremely
unstable", do you mean that there are many strange problems, or is the "hang
on exit" problem the only one?

Thanks in advance!

beleagueredly y'rs  - tim




From skip@mojam.com (Skip Montanaro)  Mon Sep  4 04:12:06 2000
From: skip@mojam.com (Skip Montanaro) (Skip Montanaro)
Date: Sun, 3 Sep 2000 22:12:06 -0500 (CDT)
Subject: [Python-Dev] libdb on by default, but no db.h
In-Reply-To: <200009040007.CAA14488@python.inrialpes.fr>
References: <200009040007.CAA14488@python.inrialpes.fr>
Message-ID: <14771.4870.954882.513141@beluga.mojam.com>

--+eN/JChl9G
Content-Type: text/plain; charset=us-ascii
Content-Description: message body text
Content-Transfer-Encoding: 7bit


    Vlad> On my AIX combo, configure assumes --with-libdb (yes) but reports
    Vlad> that

    Vlad> ...
    Vlad> checking for db_185.h... no
    Vlad> checking for db.h... no
    Vlad> ...

    Vlad> This leaves the bsddbmodule enabled but it can't compile,
    Vlad> obviously.  So this needs to be fixed ASAP.

Oops.  Please try the attached patch and let me know if it runs better.
(Don't forget to run autoconf.)  Besides fixing the problem you
reported, it tells users why bsddb was not supported if they asked for it
but it was not enabled.

Skip


--+eN/JChl9G
Content-Type: text/plain
Content-Description: better bsddb detection
Content-Disposition: inline;
	filename="configure.in.patch"
Content-Transfer-Encoding: 7bit

Index: configure.in
===================================================================
RCS file: /cvsroot/python/python/dist/src/configure.in,v
retrieving revision 1.154
diff -c -c -r1.154 configure.in
*** configure.in	2000/08/31 17:45:35	1.154
--- configure.in	2000/09/04 03:10:01
***************
*** 813,826 ****
  AC_ARG_WITH(libdb,
  [  --with(out)-libdb               disable/enable bsddb module])
  
! # default is enabled
! if test -z "$with_libdb"
! then with_libdb="yes"
  fi
! # if we found db.h, enable, unless with_libdb is expressly set to "no"
! if test "$ac_cv_header_db_h" = "yes" -a "$with_libdb" != "no"
! then with_libdb="yes"
! fi
  if test "$with_libdb" = "no"
  then
      USE_BSDDB_MODULE="#"
--- 813,833 ----
  AC_ARG_WITH(libdb,
  [  --with(out)-libdb               disable/enable bsddb module])
  
! # enabled by default, but db.h must be found
! if test "$ac_cv_header_db_h" = "yes"
! then
!     if test "$with_libdb" != "no"
!     then with_libdb="yes"
!     fi
! else
!     # make sure user knows why bsddb support wasn't enabled even
!     # though s/he requested it
!     if test "$with_libdb" = "yes"
!     then echo $ac_n "(requested, but db.h was not found) $ac_c"
!     fi
!     with_libdb="no"
  fi
! 
  if test "$with_libdb" = "no"
  then
      USE_BSDDB_MODULE="#"

--+eN/JChl9G--


From greg@cosc.canterbury.ac.nz  Mon Sep  4 04:21:14 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon, 04 Sep 2000 15:21:14 +1200 (NZST)
Subject: [Python-Dev] Re: DEBUG_SAVEALL feature for gc not in 2.0b1?
In-Reply-To: <200009021407.QAA29710@python.inrialpes.fr>
Message-ID: <200009040321.PAA18947@s454.cosc.canterbury.ac.nz>

Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov):

> The point is that we have two types of garbage: collectable and
> uncollectable.

I don't think these are the right terms. The collector can
collect the "uncollectable" garbage all right -- what it can't
do is *dispose* of it. So it should be called "undisposable"
or "unrecyclable" or "undigestable" or something.
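The distinction can be seen from Python code (a sketch: in the interpreters of this era, a cycle whose objects have __del__ methods was found by the collector but parked in gc.garbage rather than freed; since PEP 442 in Python 3.4 such cycles are finalized normally and gc.garbage stays empty):

```python
import gc

class Node:
    def __del__(self):
        pass  # a finalizer is what made cyclic garbage "undisposable"

a, b = Node(), Node()
a.other, b.other = b, a   # reference cycle, unreachable after the del
del a, b

gc.collect()
# Historically: the cycle lands in gc.garbage -- collected, not disposed.
# On Python >= 3.4: finalized and freed, so gc.garbage stays empty.
print(len(gc.garbage))
```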

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From Vladimir.Marangozov@inrialpes.fr  Mon Sep  4 04:51:31 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Mon, 4 Sep 2000 05:51:31 +0200 (CEST)
Subject: [Python-Dev] libdb on by default, but no db.h
In-Reply-To: <14771.4870.954882.513141@beluga.mojam.com> from "Skip Montanaro" at Sep 03, 2000 10:12:06 PM
Message-ID: <200009040351.FAA19784@python.inrialpes.fr>

Skip Montanaro wrote:
> 
> Oops.  Please try the attached patch and let me know it it runs better.

Runs fine. Thanks!

After looking again at Modules/Setup.config, I wonder whether it would
be handy to add a configure option --with-shared (or similar) which would
uncomment #*shared* there and in Setup automatically (in line with the
other recent niceties like --with-pydebug).

Uncommenting them manually in two files now is a pain... :-)

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From skip@mojam.com (Skip Montanaro)  Mon Sep  4 05:06:40 2000
From: skip@mojam.com (Skip Montanaro) (Skip Montanaro)
Date: Sun, 3 Sep 2000 23:06:40 -0500 (CDT)
Subject: [Python-Dev] libdb on by default, but no db.h
In-Reply-To: <200009040351.FAA19784@python.inrialpes.fr>
References: <14771.4870.954882.513141@beluga.mojam.com>
 <200009040351.FAA19784@python.inrialpes.fr>
Message-ID: <14771.8144.959081.410574@beluga.mojam.com>

    Vlad> After looking again at Modules/Setup.config, I wonder whether it
    Vlad> would be handy to add a configure option --with-shared (or
    Vlad> similar) which would uncomment #*shared* there and in Setup
    Vlad> automatically (in line with the other recent niceties like
    Vlad> --with-pydebug).

    Vlad> Uncommenting them manually in two files now is a pain... :-)

Agreed.  I'll submit a patch.

Skip


From skip@mojam.com (Skip Montanaro)  Mon Sep  4 05:16:52 2000
From: skip@mojam.com (Skip Montanaro) (Skip Montanaro)
Date: Sun, 3 Sep 2000 23:16:52 -0500 (CDT)
Subject: [Python-Dev] libdb on by default, but no db.h
In-Reply-To: <200009040351.FAA19784@python.inrialpes.fr>
References: <14771.4870.954882.513141@beluga.mojam.com>
 <200009040351.FAA19784@python.inrialpes.fr>
Message-ID: <14771.8756.760841.38442@beluga.mojam.com>

    Vlad> After looking again at Modules/Setup.config, I wonder whether it
    Vlad> would be handy to add a configure option --with-shared (or
    Vlad> similar) which would uncomment #*shared* there and in Setup
    Vlad> automatically (in line with the other recent niceties like
    Vlad> --with-pydebug).

On second thought, I think this is not a good idea right now because
Modules/Setup is not usually fiddled by the configure step.  If "#*shared*"
existed in Modules/Setup and the user executed "./configure --with-shared",
they'd be disappointed that the modules declared in Modules/Setup following
that line weren't built as shared objects.

Skip



From greg@cosc.canterbury.ac.nz  Mon Sep  4 05:34:02 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon, 04 Sep 2000 16:34:02 +1200 (NZST)
Subject: [Python-Dev] New commands to display license, credits,
 copyright info
In-Reply-To: <14770.58832.801784.267646@bitdiddle.concentric.net>
Message-ID: <200009040434.QAA18957@s454.cosc.canterbury.ac.nz>

Jeremy Hylton <jeremy@beopen.com>:

> I think the meaning is clear if the phrasing is awkward.  Would 'that'
> be any better than 'this'?

How about "for more information"?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From tim_one@email.msn.com  Mon Sep  4 09:08:27 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Mon, 4 Sep 2000 04:08:27 -0400
Subject: [Python-Dev] ME so mmap
In-Reply-To: <DOEGJPEHJOJKDFNLNCHIKEDJCAAA.audun@mindspring.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEOLHDAA.tim_one@email.msn.com>

Audun S. Runde mailto:audun@mindspring.com wins a Fabulous Prize for being
our first Windows ME tester!  Also our only, and I think he should get
another prize just for that.

The good news is that the creaky old Wise installer worked.  The bad news is
that we've got a Windows-ME-specific std test failure, in test_mmap.

This is from the installer available via anonymous FTP from
python.beopen.com,

     /pub/windows/beopen-python2b1p2-20000901.exe
     5,783,115 bytes

and here's the meat of the bad news in Audun's report:

> PLATFORM 2.
> Windows ME
> (version/build 4.90.3000 aka. "Technical Beta Special Edition"
> -- claimed to be identical to the shipping version),
> no previous Python install
> =============================================================
>
> + Try
>     python lib/test/regrtest.py
>
> --> results:
> 76 tests OK.
> 1 test failed: test_mmap (see below)
> 23 tests skipped (al, cd, cl, crypt, dbm, dl, fcntl, fork1, gdbm, gl, grp,
> imgfile, largefile, linuxaudiodev, minidom, nis, openpty, poll, pty, pwd,
> signal, sunaudiodev, timing)
>
> Rerun of test_mmap.py:
> ----------------------
> C:\Python20\Lib\test>..\..\python test_mmap.py
> Traceback (most recent call last):
>   File "test_mmap.py", line 121, in ?
>     test_both()
>   File "test_mmap.py", line 18, in test_both
>     m = mmap.mmap(f.fileno(), 2 * PAGESIZE)
> WindowsError: [Errno 6] The handle is invalid
>
> C:\Python20\Lib\test>
>
>
> --> Please let me know if there is anything I can do to help with
> --> this -- but I might need detailed instructions ;-)

So we're not even getting off the ground with mmap on ME -- it's dying in
the mmap constructor.  I'm sending this to Mark Hammond directly because he
was foolish enough <wink> to fix many mmap-on-Windows problems, but if any
other developer has access to ME feel free to grab this joy away from him.
There are no reports of test_mmap failing on any other flavor of Windows (&
clean reports from 95, 2000, NT, 98), looks extremely unlikely that it's a
flaw in the installer, and it's a gross problem right at the start.

Best guess now is that it's a bug in ME.  What?  A bug in a new flavor of
Windows?!  Na, couldn't be ...
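A minimal stand-in for what test_mmap's setup does (a sketch, not the actual test file): create a file large enough to cover the mapping, then map it. On the Windows ME report above, it is the mmap.mmap() constructor call itself that fails with "The handle is invalid".

```python
import mmap
import os
import tempfile

PAGESIZE = mmap.PAGESIZE
fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"\0" * (2 * PAGESIZE))  # file must span the mapped region
    m = mmap.mmap(fd, 2 * PAGESIZE)       # the call that dies on Windows ME
    try:
        m[:5] = b"hello"
        assert m[:5] == b"hello"
    finally:
        m.close()
finally:
    os.close(fd)
    os.unlink(path)
```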

may-as-well-believe-that-money-doesn't-grow-on-trees-ly y'rs  - tim




From tim_one@email.msn.com  Mon Sep  4 09:49:12 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Mon, 4 Sep 2000 04:49:12 -0400
Subject: [Python-Dev] Even more prerelease Python fun on Windows!
In-Reply-To: <200009021500.RAA00776@python.inrialpes.fr>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEONHDAA.tim_one@email.msn.com>

[Vladimir Marangozov, heroically responds to pleas for Windows help!]

>     /pub/windows/beopen-python2b1p2-20000901.exe
>     5,783,115 bytes
>
> In case my feedback matters, being a Windows amateur,

That's *good*:  amateurs make better testers because they're less prone to
rationalize away problems or gloss over things they needed to fix by hand.

> the installation went smoothly on my home P100

You're kidding, right?  They give away faster processors in cereal boxes now
<wink>.

> with some early Win95 pre-release.

Brrrrrrr.  Even toxic waste dumps won't accept *those* anymore!

> In the great Windows tradition, I was asked to reboot & did so.

That's interesting -- first report of a reboot I've gotten.  But it makes
sense:  everyone else who has tried this is an eager Windows beta tester or
a Python Windows developer, so all their system files are likely up to date.
Windows only makes you reboot if it has to *replace* a system file with a
newer one from the install (unlike Unix, Windows won't let you "unlink" a
file that's in use; that's why they have to replace popular system files
during the reboot, *before* Windows proper starts up).

> The regression tests passed in console mode.

Frankly, I'm amazed!  Please don't test anymore <0.9 wink>.

> Then launched successfully IDLE. In IDLE I get *beep* sounds every
> time I hit RETURN without typing anything.  I was able to close both
> the console and IDLE without problems.

Assuming you saw Guido's msg about the *beep*s.  If not, it's an IDLE buglet
and you're not alone.  Won't be fixed for 2b1, maybe by 2.0.

> Haven't tried the uninstall link, though.

It will work -- kinda.  It doesn't really uninstall everything on any flavor
of Windows.  I think BeOpen.com should agree to buy me an installer newer
than your Win95 prerelease.

> don't-ask-me-any-questions-about-Windows'ly y'rs

I was *going* to, and I still am.  And your score is going on your Permanent
Record, so don't screw this up!  But since you volunteered such a nice and
helpful test report, I'll give you a relatively easy one:  which company
sells Windows?

A. BeOpen PythonLabs
B. ActiveState
C. ReportLabs
D. Microsoft
E. PythonWare
F. Red Hat
G. General Motors
H. Corporation for National Research Initiatives
I. Free Software Foundation
J. Sun Microsystems
K. National Security Agency

hint:-it's-the-only-one-without-an-"e"-ly y'rs  - tim




From nascheme@enme.ucalgary.ca  Mon Sep  4 15:18:28 2000
From: nascheme@enme.ucalgary.ca (Neil Schemenauer)
Date: Mon, 4 Sep 2000 08:18:28 -0600
Subject: [Python-Dev] Thread problems on Linux
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEOBHDAA.tim_one@email.msn.com>; from Tim Peters on Sun, Sep 03, 2000 at 10:44:28PM -0400
References: <200009040233.VAA27866@sirius> <LNBBLJKPBEHFEDALKOLCGEOBHDAA.tim_one@email.msn.com>
Message-ID: <20000904081828.B23753@keymaster.enme.ucalgary.ca>

The pthread model does not map well into the Linux clone model.  The
standard seems to assume that threads are implemented as a process.
Linus is adding some extra features in 2.4 which may help (thread
groups).  We will see if the glibc maintainers can make use of these.

I'm thinking of creating a thread_linux header file.  Do you think that
would be a good idea?  clone() seems to be pretty easy to use although
it is quite low level.

  Neil


From guido@beopen.com  Mon Sep  4 16:40:58 2000
From: guido@beopen.com (Guido van Rossum)
Date: Mon, 04 Sep 2000 10:40:58 -0500
Subject: [Python-Dev] Thread problems on Linux
In-Reply-To: Your message of "Mon, 04 Sep 2000 08:18:28 CST."
 <20000904081828.B23753@keymaster.enme.ucalgary.ca>
References: <200009040233.VAA27866@sirius> <LNBBLJKPBEHFEDALKOLCGEOBHDAA.tim_one@email.msn.com>
 <20000904081828.B23753@keymaster.enme.ucalgary.ca>
Message-ID: <200009041540.KAA23263@cj20424-a.reston1.va.home.com>

> The pthread model does not map well into the Linux clone model.  The
> standard seems to assume that threads are implemented as a process.
> Linus is adding some extra features in 2.4 which may help (thread
> groups).  We will see if the glibc maintainers can make use of these.
> 
> I'm thinking of creating a thread_linux header file.  Do you think that
> would be a good idea?  clone() seems to be pretty easy to use although
> it is quite low level.

This seems nice at first, but probably won't work too well when you
consider embedding Python in applications that use the Posix threads
library.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From cgw@alum.mit.edu  Mon Sep  4 16:02:03 2000
From: cgw@alum.mit.edu (Charles G Waldman)
Date: Mon, 4 Sep 2000 10:02:03 -0500
Subject: [Python-Dev] mail sent as "root"
Message-ID: <200009041502.KAA05864@buffalo.fnal.gov>

sorry for the mail sent as "root" - d'oh.  I still am not able to
send mail from fnal.gov to python.org (no route to host) and am
playing some screwy games to get my mail delivered.



From cgw@alum.mit.edu  Mon Sep  4 16:52:42 2000
From: cgw@alum.mit.edu (Charles G Waldman)
Date: Mon, 4 Sep 2000 10:52:42 -0500
Subject: [Python-Dev] Thread problems on Linux
Message-ID: <200009041552.KAA06048@buffalo.fnal.gov>

Neil wrote:

>I'm thinking of creating a thread_linux header file.  Do you think that 
>would be a good idea?  clone() seems to be pretty easy to use although 
>it is quite low level. 
 
Sounds like a lot of work to me.   The pthread library gets us two
things (essentially) - a function to create threads, which you could
pretty easily replace with clone(), and other functions to handle
mutexes and conditions.  If you replace pthread_create with clone
you have a lot of work to do to implement the locking stuff... Of
course, if you're willing to do this work, then more power to you.
But from my point of view, I'm at a site where we're using pthreads
on Linux in non-Python applications as well, so I'm more interested
in diagnosing and trying to fix (or at least putting together a   
detailed and coherent bug report on) the platform bugs, rather than
trying to work around them in the Python interpreter.




From Vladimir.Marangozov@inrialpes.fr  Mon Sep  4 19:11:33 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Mon, 4 Sep 2000 20:11:33 +0200 (CEST)
Subject: [Python-Dev] Even more prerelease Python fun on Windows!
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKEONHDAA.tim_one@email.msn.com> from "Tim Peters" at Sep 04, 2000 04:49:12 AM
Message-ID: <200009041811.UAA21177@python.inrialpes.fr>

Tim Peters wrote:
> 
> [Vladimir Marangozov, heroically responds to pleas for Windows help!]
> 
> That's *good*:  amateurs make better testers because they're less prone to
> rationalize away problems or gloss over things they needed to fix by hand.

Thanks. This is indeed the truth.

> 
> > the installation went smoothly on my home P100
> 
> You're kidding, right?  They give away faster processors in cereal boxes now
> <wink>.

No. I'm proud to possess a working Pentium 100 with the F0 0F bug. This
is a genuine snapshot of the advances of a bunch of technologies at the
end of the XX century.

> 
> > with some early Win95 pre-release.
> 
> Brrrrrrr.  Even toxic waste dumps won't accept *those* anymore!

see above.

> 
> > Haven't tried the uninstall link, though.
> 
> It will work -- kinda.  It doesn't really uninstall everything on any flavor
> of Windows.  I think BeOpen.com should agree to buy me an installer newer
> than your Win95 prerelease.

Wasn't brave enough to reboot once again <wink>.

> 
> > don't-ask-me-any-questions-about-Windows'ly y'rs
> 
> I was *going* to, and I still am.

Seriously, if you need more feedback, you'd have to give me click by click
instructions. I'm in trouble each time I want to do any real work within
the Windows clickodrome.

> And your score is going on your Permanent Record, so don't screw this up!
> But since you volunteered such a nice and helpful test report, I'll give
> you a relatively easy one:  which company sells Windows?
> 
> A. BeOpen PythonLabs
> B. ActiveState
> C. ReportLabs
> D. Microsoft
> E. PythonWare
> F. Red Hat
> G. General Motors
> H. Corporation for National Research Initiatives
> I. Free Software Foundation
> J. Sun Microsystems
> K. National Security Agency
> 
> hint:-it's-the-only-one-without-an-"e"-ly y'rs  - tim
> 

Hm. Thanks for the hint! Let's see. It's not "me" for sure. Could
be "you" though <wink>. I wish it was General Motors...

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From nascheme@enme.ucalgary.ca  Mon Sep  4 20:28:38 2000
From: nascheme@enme.ucalgary.ca (Neil Schemenauer)
Date: Mon, 4 Sep 2000 13:28:38 -0600
Subject: [Python-Dev] Thread problems on Linux
In-Reply-To: <200009041504.KAA05892@buffalo.fnal.gov>; from Charles G Waldman on Mon, Sep 04, 2000 at 10:04:40AM -0500
References: <200009041504.KAA05892@buffalo.fnal.gov>
Message-ID: <20000904132838.A25571@keymaster.enme.ucalgary.ca>

On Mon, Sep 04, 2000 at 10:04:40AM -0500, Charles G Waldman wrote:
>If you replace pthread_create with clone you have a lot of work to do
>to implement the locking stuff...

Locks exist in /usr/include/asm.  It is Linux specific but so is
clone().

  Neil


From thomas@xs4all.net  Mon Sep  4 21:14:39 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Mon, 4 Sep 2000 22:14:39 +0200
Subject: [Python-Dev] Vacation
Message-ID: <20000904221438.U12695@xs4all.nl>

I'll be offline for two weeks, enjoying a sunny (hopefully!) holiday in
southern Italy. I uploaded the docs I had for augmented assignment; not
terribly much I'm afraid :P We had some trouble at work over the weekend,
which cost me most of the time I thought I had to finish some of this up.

(For the developers among you that, like me, do a bit of sysadmining on the
side: one of our nameservers was hacked, either through password-guessing
(unlikely), sniffing (unlikely), a hole in ssh (1.2.26, possible but
unlikely) or a hole in named (BIND 8.2.2-P5, very unlikely). There was a
copy of the named binary in /tmp under an obscure filename, which leads us
to believe it was the latter -- which scares the shit out of me personally,
as anything before P3 was proven to be insecure, and the entire sane world
and their dog runs P5. Possibly it was 'just' a bug in Linux/RedHat, though.
Cleaning up after scriptkiddies, a great way to spend your weekend before
your vacation, let me tell you! :P)

I'll be back on the 19th, plenty of time left to do beta testing after that
:)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From rob@hooft.net  Tue Sep  5 07:15:04 2000
From: rob@hooft.net (Rob W. W. Hooft)
Date: Tue, 5 Sep 2000 08:15:04 +0200 (CEST)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Misc NEWS,1.52,1.53
In-Reply-To: <200009050438.VAA03390@slayer.i.sourceforge.net>
References: <200009050438.VAA03390@slayer.i.sourceforge.net>
Message-ID: <14772.36712.451676.957918@temoleh.chem.uu.nl>

! Augmented Assignment
! --------------------
!
! This must have been the most-requested feature of the past years!
! Eleven new assignment operators were added:
!
!     += -+ *= /= %= **= <<= >>= &= ^= |=

Interesting operator "-+" in there! I won't submit this as patch
to sourceforge....
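Incidentally, beyond the typo, the corrected operator set is worth a quick illustration (written against current Python): for mutable objects, augmented assignment can update in place rather than rebind, which is the behavioral difference the 2.0 feature introduced.

```python
a = [1, 2]
alias = a
a += [3]            # in place: list.__iadd__ extends the existing list
assert alias == [1, 2, 3] and alias is a

b = [1, 2]
alias2 = b
b = b + [3]         # plain assignment rebinds; alias2 keeps the old list
assert alias2 == [1, 2] and alias2 is not b
```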

Index: dist/src/Misc/NEWS
===================================================================
RCS file: /cvsroot/python/python/dist/src/Misc/NEWS,v
retrieving revision 1.53
diff -u -c -r1.53 NEWS
cvs server: conflicting specifications of output style
*** dist/src/Misc/NEWS  2000/09/05 04:38:34     1.53
--- dist/src/Misc/NEWS  2000/09/05 06:14:16
***************
*** 66,72 ****
  This must have been the most-requested feature of the past years!
  Eleven new assignment operators were added:
  
!     += -+ *= /= %= **= <<= >>= &= ^= |=
  
  For example,
  
--- 66,72 ----
  This must have been the most-requested feature of the past years!
  Eleven new assignment operators were added:
  
!     += -= *= /= %= **= <<= >>= &= ^= |=
  
  For example,
  


Regards,

Rob Hooft

-- 
=====   rob@hooft.net          http://www.hooft.net/people/rob/  =====
=====   R&D, Nonius BV, Delft  http://www.nonius.nl/             =====
===== PGPid 0xFA19277D ========================== Use Linux! =========


From bwarsaw@beopen.com  Tue Sep  5 08:23:55 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Tue, 5 Sep 2000 03:23:55 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Misc NEWS,1.52,1.53
References: <200009050438.VAA03390@slayer.i.sourceforge.net>
 <14772.36712.451676.957918@temoleh.chem.uu.nl>
Message-ID: <14772.40843.669856.756485@anthem.concentric.net>

>>>>> "RWWH" == Rob W W Hooft <rob@hooft.net> writes:

    RWWH> Interesting operator "-+" in there! I won't submit this as
    RWWH> patch to sourceforge....

It's Python 2.0's way of writing "no op" :)

I've already submitted this internally.  Doubt it will make it into
2.0b1, but we'll get it into 2.0 final.

-Barry


From tdickenson@geminidataloggers.com  Tue Sep  5 12:19:42 2000
From: tdickenson@geminidataloggers.com (Toby Dickenson)
Date: Tue, 05 Sep 2000 12:19:42 +0100
Subject: [Python-Dev] Re: [I18n-sig] ustr
In-Reply-To: <200007071244.HAA03694@cj20424-a.reston1.va.home.com>
References: <r39bmsc6remdupiv869s5agm46m315ebeq@4ax.com>   <3965BBE5.D67DD838@lemburg.com> <200007071244.HAA03694@cj20424-a.reston1.va.home.com>
Message-ID: <vhl9rsclpk9e89oaeehpg7sec79ar8cdru@4ax.com>

On Fri, 07 Jul 2000 07:44:03 -0500, Guido van Rossum
<guido@beopen.com> wrote:

We debated a ustr function in July. Does anyone have this in hand? I
can prepare a patch if necessary.

>> Toby Dickenson wrote:
>> >
>> > I'm just nearing the end of getting Zope to play well with unicode
>> > data. Most of the changes involved replacing a call to str, in
>> > situations where either a unicode or narrow string would be
>> > acceptable.
>> >
>> > My best alternative is:
>> >=20
>> > def convert_to_something_stringlike(x):
>> >     if type(x)==type(u''):
>> >         return x
>> >     else:
>> >         return str(x)
>> >
>> > This seems like a fundamental operation - would it be worth having
>> > something similar in the standard library?
>
>Marc-Andre Lemburg replied:
>
>> You mean: for Unicode return Unicode and for everything else
>> return strings ?
>>
>> It doesn't fit well with the builtins str() and unicode(). I'd
>> say, make this a userland helper.
>
>I think this would be helpful to have in the std library.  Note that
>in JPython, you'd already use str() for this, and in Python 3000 this
>may also be the case.  At some point in the design discussion for the
>current Unicode support we also thought that we wanted str() to do
>this (i.e. allow 8-bit and Unicode string returns), until we realized
>that there were too many places that would be very unhappy if str()
>returned a Unicode string!
>
>The problem is similar to a situation you have with numbers: sometimes
>you want a coercion that converts everything to float except it should
>leave complex numbers complex.  In other words it coerces up to float
>but it never coerces down to float.  Luckily you can write that as
>"x+0.0", which converts int and long to float with the same value while
>leaving complex alone.
>
>For strings there is no compact notation like "+0.0" if you want to
>convert to string or Unicode -- adding "" might work in Perl, but not
>in Python.
>
>I propose ustr(x) with the semantics given by Toby.  Class support (an
>__ustr__ method, with fallbacks on __str__ and __unicode__) would also
>be handy.
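The proposal above can be sketched in a few lines. Note that `ustr` is a name proposed in this thread, not an existing builtin; the sketch below is written with modern `str` standing in for the thread's `unicode` type so it runs as-is, and the proposed `__ustr__` class hook is omitted:

```python
# Sketch of the proposed ustr() helper: pass string objects through
# untouched, apply str() to everything else.  (In the Python 2 of this
# thread the isinstance check would be against `unicode`.)
def ustr(x):
    if isinstance(x, str):
        return x
    return str(x)

# The numeric analogy Guido gives: "x + 0.0" coerces int up to float
# but leaves complex alone, just as ustr() never coerces a Unicode
# string down to a byte string.
print(ustr(42), ustr(u"caf\xe9"))
print(3 + 0.0, (1 + 2j) + 0.0)
```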


Toby Dickenson
tdickenson@geminidataloggers.com


From guido@beopen.com  Tue Sep  5 15:29:44 2000
From: guido@beopen.com (Guido van Rossum)
Date: Tue, 05 Sep 2000 09:29:44 -0500
Subject: [Python-Dev] License status and 1.6 and 2.0 releases
Message-ID: <200009051429.JAA19296@cj20424-a.reston1.va.home.com>

Folks,

After a Labor Day weekend full of excitement, I have good news and bad
news.

The good news is that both Python 1.6 and Python 2.0b1 will be
released today (in *some* US timezone :-).  The former from
python.org, the latter from pythonlabs.com.

The bad news is that there's still no agreement from Stallman that the
CNRI open source license is GPL-compatible.  See my previous post
here.  (Re: Conflict with the GPL.)  Given that we still don't know
that dual licensing will be necessary and sufficient to make the 2.0
license GPL-compatible, we decided not to go for dual licensing just
yet -- if it transpires later that it is necessary, we'll add it to
the 2.0 final license.

At this point, our best shot seems to be to arrange a meeting between
CNRI's lawyer and Stallman's lawyer.  Without the lawyers there, we
never seem to be able to get a commitment to an agreement.  CNRI is
willing to do this; Stallman's lawyer (Eben Moglen; he's a law
professor at Columbia U, not NYU as I previously mentioned) is even
harder to get a hold of than Stallman himself, so it may be a while.
Given CNRI's repeatedly expressed commitment to move this forward, I
don't want to hold up any of the releases that were planned for today
any longer.

So look forward to announcements later today, and get out the
(qualified) champagne...!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From Vladimir.Marangozov@inrialpes.fr  Tue Sep  5 15:17:36 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Tue, 5 Sep 2000 16:17:36 +0200 (CEST)
Subject: [Python-Dev] License status and 1.6 and 2.0 releases
In-Reply-To: <200009051430.JAA19323@cj20424-a.reston1.va.home.com> from "Guido van Rossum" at Sep 05, 2000 09:30:32 AM
Message-ID: <200009051417.QAA27424@python.inrialpes.fr>

Guido van Rossum wrote:
> 
> Folks,
> 
> After a Labor Day weekend ful excitement, I have good news and bad
> news.

Don't worry about the bad news! :-)

> 
> The good news is that both Python 1.6 and Python 2.0b1 will be
> released today (in *some* US timezone :-).  The former from
> python.org, the latter from pythonlabs.com.

Great! w.r.t. the latest call for help with patches, tell us
which patches you want, and from whom, among those you know about.

> 
> The bad news is that there's still no agreement from Stallman that the
> CNRI open source license is GPL-compatible.

This is no surprise.  I don't think they will agree any time soon.
If they do so by the end of the year, that would make us happy, though.

> So look forward to announcements later today, and get out the
> (qualified) champagne...!

Ahem, which one?
Veuve Clicquot, Dom Perignon, Moet & Chandon or Taittinger Millésimé? :-)

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From skip@mojam.com (Skip Montanaro)  Tue Sep  5 15:16:39 2000
From: skip@mojam.com (Skip Montanaro) (Skip Montanaro)
Date: Tue, 5 Sep 2000 09:16:39 -0500 (CDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Misc NEWS,1.53,1.54
In-Reply-To: <200009051242.FAA13258@slayer.i.sourceforge.net>
References: <200009051242.FAA13258@slayer.i.sourceforge.net>
Message-ID: <14773.71.989338.110654@beluga.mojam.com>

--uduWpl+bD4
Content-Type: text/plain; charset=us-ascii
Content-Description: message body text
Content-Transfer-Encoding: 7bit


    Guido> I could use help here!!!!  Please mail me patches ASAP.  We may have
    Guido> to put some of this off to 2.0final, but it's best to have it in shape
    Guido> now...

Attached.

Skip


--uduWpl+bD4
Content-Type: application/octet-stream
Content-Description: note about readline history
Content-Disposition: attachment;
	filename="news.patch"
Content-Transfer-Encoding: base64

SW5kZXg6IE1pc2MvTkVXUwo9PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09ClJDUyBmaWxlOiAvY3Zzcm9vdC9weXRo
b24vcHl0aG9uL2Rpc3Qvc3JjL01pc2MvTkVXUyx2CnJldHJpZXZpbmcgcmV2aXNpb24gMS41
NApkaWZmIC1jIC1jIC1yMS41NCBORVdTCioqKiBNaXNjL05FV1MJMjAwMC8wOS8wNSAxMjo0
Mjo0NgkxLjU0Ci0tLSBNaXNjL05FV1MJMjAwMC8wOS8wNSAxNDoxNjowMwoqKioqKioqKioq
KioqKioKKioqIDI1NywyNjIgKioqKgotLS0gMjU3LDI2NSAtLS0tCiAgCiAgc29ja2V0IC0g
bmV3IGZ1bmN0aW9uIGdldGZxZG4oKQogIAorIHJlYWRsaW5lIC0gbmV3IGZ1bmN0aW9ucyB0
byByZWFkLCB3cml0ZSBhbmQgdHJ1bmNhdGUgaGlzdG9yeSBmaWxlcy4gIFRoZQorIHJlYWRs
aW5lIHNlY3Rpb24gb2YgdGhlIGxpYnJhcnkgcmVmZXJlbmNlIG1hbnVhbCBjb250YWlucyBh
biBleGFtcGxlLgorIAogIFhYWDogSSdtIHN1cmUgdGhlcmUgYXJlIG90aGVycwogIAogIAo=

--uduWpl+bD4--
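The base64 attachment above decodes to a NEWS entry announcing new readline functions to read, write and truncate history files. A minimal sketch of their use, assuming a Python built with GNU readline support:

```python
import os
import readline
import tempfile

# Record a couple of lines, save them to a history file, then restore
# them -- the read/write functions announced in the attached NEWS note.
readline.clear_history()
readline.add_history("x = 1")
readline.add_history("print(x)")

path = os.path.join(tempfile.mkdtemp(), "history")
readline.write_history_file(path)

readline.clear_history()
readline.read_history_file(path)
print(readline.get_current_history_length())
```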


From jeremy@beopen.com  Tue Sep  5 15:58:46 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Tue, 5 Sep 2000 10:58:46 -0400 (EDT)
Subject: [Python-Dev] malloc restructuring in 1.6
Message-ID: <14773.2598.24665.940797@bitdiddle.concentric.net>

I'm editing the NEWS file for 2.0 and noticed that Vladimir's malloc
changes are listed as new for 2.0.  I think they actually went into
1.6, but I'm not certain.  Can anyone confirm?

Jeremy


From petrilli@amber.org  Tue Sep  5 16:19:05 2000
From: petrilli@amber.org (Christopher Petrilli)
Date: Tue, 5 Sep 2000 11:19:05 -0400
Subject: [Python-Dev] License status and 1.6 and 2.0 releases
In-Reply-To: <200009051417.QAA27424@python.inrialpes.fr>; from Vladimir.Marangozov@inrialpes.fr on Tue, Sep 05, 2000 at 04:17:36PM +0200
References: <200009051430.JAA19323@cj20424-a.reston1.va.home.com> <200009051417.QAA27424@python.inrialpes.fr>
Message-ID: <20000905111904.A14540@trump.amber.org>

Vladimir Marangozov [Vladimir.Marangozov@inrialpes.fr] wrote:
> Ahem, which one?
> Veuve Clicquot, Dom Perignon, Moet & Chandon or Taittinger Millésimé? :-)

Given the involvement of Richard Stallman, and its similarity to a
peace accord during WWII, I'd vote for Pol Roger Sir Winston Churchill 
cuvee :-)

Chris

-- 
| Christopher Petrilli
| petrilli@amber.org


From Vladimir.Marangozov@inrialpes.fr  Tue Sep  5 16:38:47 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Tue, 5 Sep 2000 17:38:47 +0200 (CEST)
Subject: [Python-Dev] malloc restructuring in 1.6
In-Reply-To: <14773.2598.24665.940797@bitdiddle.concentric.net> from "Jeremy Hylton" at Sep 05, 2000 10:58:46 AM
Message-ID: <200009051538.RAA27615@python.inrialpes.fr>

Jeremy Hylton wrote:
> 
> I'm editing the NEWS file for 2.0 and noticed that Vladimir's malloc
> changes are listed as new for 2.0.  I think they actually went into
> 1.6, but I'm not certain.  Can anyone confirm?

Yes, they're in 1.6.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From Vladimir.Marangozov@inrialpes.fr  Tue Sep  5 17:02:51 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Tue, 5 Sep 2000 18:02:51 +0200 (CEST)
Subject: [Python-Dev] License status and 1.6 and 2.0 releases
In-Reply-To: <20000905111904.A14540@trump.amber.org> from "Christopher Petrilli" at Sep 05, 2000 11:19:05 AM
Message-ID: <200009051602.SAA27759@python.inrialpes.fr>

Christopher Petrilli wrote:
> 
> Vladimir Marangozov [Vladimir.Marangozov@inrialpes.fr] wrote:
> > Ahem, which one?
> > Veuve Clicquot, Dom Perignon, Moet & Chandon or Taittinger Millésimé? :-)
> 
> Given the involvement of Richard Stallman, and its similarity to a
> peace accord during WWII, I'd vote for Pol Roger Sir Winston Churchill 
> cuvee :-)
> 

Ah. That would have been my pleasure, but I am out of stock for this one.
Sorry. However, I'll make sure to order a bottle and keep it ready in my
cellar for the ratification of the final license. In the meantime, the
above is the best I can offer -- the rest is cheap stuff to be consumed
only on bad news <wink>.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From jeremy@beopen.com  Tue Sep  5 19:43:04 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Tue, 5 Sep 2000 14:43:04 -0400 (EDT)
Subject: [Python-Dev] checkin messages that reference SF bugs or patches
Message-ID: <14773.16056.958855.185889@bitdiddle.concentric.net>

If you commit a change that closes an SF bug or patch, please write a
checkin message that describes the change independently of the
information stored in SF.  You should also reference the bug or patch
id, but the id alone is not sufficient.

I am working on the NEWS file for Python 2.0 and have found a few
checkin messages that just said "SF patch #010101."  It's tedious to
go find the closed patch entry and read all the discussion.  Let's
assume the person reading the CVS log does not have access to the SF
databases. 

Jeremy


From akuchlin@mems-exchange.org  Tue Sep  5 19:57:05 2000
From: akuchlin@mems-exchange.org (Andrew Kuchling)
Date: Tue, 5 Sep 2000 14:57:05 -0400
Subject: [Python-Dev] Updated version of asyncore.py?
Message-ID: <20000905145705.A2512@kronos.cnri.reston.va.us>

asyncore.py in the CVS tree is revision 2.40 1999/05/27, while Sam
Rushing's most recent tarball contains revision 2.49 2000/05/04.  The
major change is that lots of methods in 2.49 have an extra optional
argument, map=None.  (I noticed the discrepancy while packaging ZEO,
which assumes the most recent version.)

asynchat.py is also slightly out of date: 
< #     Id: asynchat.py,v 2.23 1999/05/01 04:49:24 rushing Exp
---
> #     $Id: asynchat.py,v 2.25 1999/11/18 11:01:08 rushing Exp $

The CVS versions have additional docstrings and a few typo fixes in
comments.  Should the Python library versions be updated?  (+1 from
me, obviously.)

--amk


From martin@loewis.home.cs.tu-berlin.de  Tue Sep  5 21:46:16 2000
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Tue, 5 Sep 2000 22:46:16 +0200
Subject: [Python-Dev] Re: urllib.URLopener does not work with proxies (Bug 110692)
Message-ID: <200009052046.WAA03605@loewis.home.cs.tu-berlin.de>

Hi Andrew,

This is likely incorrect usage of the module. The proxy argument must
be a dictionary mapping strings of protocol names to  strings of URLs.

Please confirm whether this was indeed the problem; if not, please add
more detail as to how exactly you had used the module.
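For reference, a minimal sketch of the expected calling convention (the proxy host below is a placeholder, not taken from the report):

```python
# The proxies argument to urllib.URLopener maps protocol names to
# proxy URLs, e.g. (hypothetical host):
proxies = {
    "http": "http://proxy.example.com:8080",
    "ftp": "http://proxy.example.com:8080",
}
# In the urllib of this thread one would then write:
#   import urllib
#   opener = urllib.URLopener(proxies)
#   f = opener.open("http://www.python.org/")
print(sorted(proxies))
```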

See

http://sourceforge.net/bugs/?func=detailbug&bug_id=110692&group_id=5470

for the status of this report; it would be appreciated if you recorded
any comments on that page.

Regards,
Martin



From guido@cj20424-a.reston1.va.home.com  Tue Sep  5 19:49:38 2000
From: guido@cj20424-a.reston1.va.home.com (Guido van Rossum)
Date: Tue, 05 Sep 2000 13:49:38 -0500
Subject: [Python-Dev] Python 1.6, the final release, is out!
Message-ID: <200009051849.NAA01719@cj20424-a.reston1.va.home.com>

------- Blind-Carbon-Copy

To: python-list@python.org (Python mailing list),
    python-announce-list@python.org
Subject: Python 1.6, the final release, is out!
From: Guido van Rossum <guido@beopen.com>
Date: Tue, 05 Sep 2000 13:49:38 -0500
Sender: guido@cj20424-a.reston1.va.home.com

OK folks, believe it or not, Python 1.6 is released.

Please go here to pick it up:

    http://www.python.org/1.6/

There's a tarball and a Windows installer, and a long list of new
features.

CNRI has placed an open source license on this version.  CNRI believes
that this version is compatible with the GPL, but there is a
technicality concerning the choice of law provision, which Richard
Stallman believes may make it incompatible.  CNRI is still trying to
work this out with Stallman.  Future versions of Python will be
released by BeOpen PythonLabs under a GPL-compatible license if at all
possible.

There's Only One Way To Do It.

- --Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)

------- End of Blind-Carbon-Copy


From martin@loewis.home.cs.tu-berlin.de  Tue Sep  5 23:03:16 2000
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Wed, 6 Sep 2000 00:03:16 +0200
Subject: [Python-Dev] undefined symbol in custom interpeter (Bug 110701)
Message-ID: <200009052203.AAA04445@loewis.home.cs.tu-berlin.de>

Your PR is now being tracked at

http://sourceforge.net/bugs/?func=detailbug&bug_id=110701&group_id=5470

This is not a bug in Python. When linking a custom interpreter, you
need to make sure all symbols are exported to modules. On FreeBSD, you
do this by adding -Wl,--export-dynamic to the linker line.
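Concretely, the link step might look like the following (the object file names, library version and paths are illustrative, not taken from the report):

```shell
# Link a custom interpreter so that extension modules loaded at run
# time can resolve the Python C API symbols in the main executable.
cc -o myapp main.o \
    -L/usr/local/lib/python1.6/config -lpython1.6 \
    -lm -lutil \
    -Wl,--export-dynamic
```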

Can someone please close this report?

Martin


From jeremy@beopen.com  Tue Sep  5 23:20:07 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Tue, 5 Sep 2000 18:20:07 -0400 (EDT)
Subject: [Python-Dev] undefined symbol in custom interpeter (Bug 110701)
In-Reply-To: <200009052203.AAA04445@loewis.home.cs.tu-berlin.de>
References: <200009052203.AAA04445@loewis.home.cs.tu-berlin.de>
Message-ID: <14773.29079.142749.496111@bitdiddle.concentric.net>

Closed it.  Thanks.

Jeremy


From skip@mojam.com (Skip Montanaro)  Tue Sep  5 23:38:02 2000
From: skip@mojam.com (Skip Montanaro) (Skip Montanaro)
Date: Tue, 5 Sep 2000 17:38:02 -0500 (CDT)
Subject: [Python-Dev] Updated version of asyncore.py?
In-Reply-To: <20000905145705.A2512@kronos.cnri.reston.va.us>
References: <20000905145705.A2512@kronos.cnri.reston.va.us>
Message-ID: <14773.30154.924465.632830@beluga.mojam.com>

    Andrew> The CVS versions have additional docstrings and a few typo fixes
    Andrew> in comments.  Should the Python library versions be updated?
    Andrew> (+1 from me, obviously.)

+1 from me as well.  I think asyncore.py and asynchat.py are important
enough to a number of packages that we ought to make the effort to keep the
Python-distributed versions up-to-date.  I suspect adding Sam as a developer
would make keeping it updated in CVS much easier than in the past.

Skip


From guido@beopen.com  Wed Sep  6 05:49:27 2000
From: guido@beopen.com (Guido van Rossum)
Date: Tue, 05 Sep 2000 23:49:27 -0500
Subject: [Python-Dev] Python 2.0b1 is released!
Message-ID: <200009060449.XAA02145@cj20424-a.reston1.va.home.com>

A unique event in all the history of Python: two releases on the same
day!  (At least in my timezone. :-)

Python 2.0b1 is released.  The BeOpen PythonLabs and our cast of
SourceForge volunteers have been working on this version since May.
Please go here to pick it up:

    http://www.pythonlabs.com/tech/python2.0/

There's a tarball and a Windows installer, online documentation (with
a new color scheme :-), RPMs, and a long list of new features.  OK, a
teaser:

  - Augmented assignment, e.g. x += 1
  - List comprehensions, e.g. [x**2 for x in range(10)]
  - Extended import statement, e.g. import Module as Name
  - Extended print statement, e.g. print >> file, "Hello"
  - Optional collection of cyclical garbage
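Three of the features above still work unchanged today; a quick taste, written in modern syntax (the old-style `print >>` extension and the gc details are left out since they differ across versions):

```python
# Augmented assignment, a list comprehension, and "import ... as ..."
# in one breath -- three of the new 2.0 features listed above.
import math as m

x = 1
x += 1                                   # augmented assignment
squares = [n ** 2 for n in range(10)]    # list comprehension
print(x, squares[:3], m.floor(2.5))
```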

There's one bit of sad news: according to Richard Stallman, this
version is no more compatible with the GPL than version 1.6 that was
released this morning by CNRI, because of a technicality concerning
the choice of law provision in the CNRI license.  Because 2.0b1 has to
be considered a derivative work of 1.6, this technicality in the CNRI
license applies to 2.0 too (and to any other derivative works of 1.6).
CNRI is still trying to work this out with Stallman, so I hope that we
will be able to release future versions of Python under a
GPL-compatible license.

There's Only One Way To Do It.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From cgw@fnal.gov  Wed Sep  6 15:31:11 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Wed, 6 Sep 2000 09:31:11 -0500 (CDT)
Subject: [Python-Dev] newimp.py
Message-ID: <14774.21807.691920.988409@buffalo.fnal.gov>

Installing the brand-new 2.0b1 I see this:

Compiling /usr/lib/python2.0/newimp.py ...
  File "/usr/lib/python2.0/newimp.py", line 137
    envDict[varNm] = val
                        ^
And attempting to import it gives me:

Python 2.0b1 (#14, Sep  6 2000, 09:24:44) 
[GCC 2.96 20000905 (experimental)] on linux2
Type "copyright", "credits" or "license" for more information.
>>> import newimp
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/usr/lib/python2.0/newimp.py", line 1567, in ?
    init()
  File "/usr/lib/python2.0/newimp.py", line 203, in init
    if (not aMod.__dict__.has_key(PKG_NM)) or full_reset:
AttributeError: 'None' object has no attribute '__dict__'

This code was last touched on 1995/07/12.  It looks defunct to me.
Should it be removed from the distribution or should I spend the time
to fix it?




From skip@mojam.com (Skip Montanaro)  Wed Sep  6 16:12:56 2000
From: skip@mojam.com (Skip Montanaro) (Skip Montanaro)
Date: Wed, 6 Sep 2000 10:12:56 -0500 (CDT)
Subject: [Python-Dev] newimp.py
In-Reply-To: <14774.21807.691920.988409@buffalo.fnal.gov>
References: <14774.21807.691920.988409@buffalo.fnal.gov>
Message-ID: <14774.24312.78161.249542@beluga.mojam.com>

    Charles> This code was last touched on 1995/07/12.  It looks defunct to
    Charles> me.  Should it be removed from the distribution or should I
    Charles> spend the time to fix it?

Charles,

Try deleting /usr/lib/python2.0/newimp.py, then do a re-install.  (Actually,
perhaps you should delete *.py in that directory and selectively delete
subdirectories as well.)  I don't see newimp.py anywhere in the 2.0b1 tree.

Skip


From cgw@fnal.gov  Wed Sep  6 18:56:44 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Wed, 6 Sep 2000 12:56:44 -0500 (CDT)
Subject: [Python-Dev] newimp.py
In-Reply-To: <14774.24312.78161.249542@beluga.mojam.com>
References: <14774.21807.691920.988409@buffalo.fnal.gov>
 <14774.24312.78161.249542@beluga.mojam.com>
Message-ID: <14774.34140.432485.450929@buffalo.fnal.gov>

Skip Montanaro writes:

 > Try deleting /usr/lib/python2.0/newimp.py, then do a re-install.  (Actually,
 > perhaps you should delete *.py in that directory and selectively delete
 > subdirectories as well.)  I don't see newimp.py anywhere in the 2.0b1 tree.

Something is really screwed up with CVS, or my understanding of it.
Look at this transcript:

buffalo:Lib$ pwd
/usr/local/src/Python-CVS/python/dist/src/Lib

buffalo:Lib$ rm newimp.py                                                      

buffalo:Lib$ cvs status newimp.py                                              
===================================================================
File: no file newimp.py         Status: Needs Checkout

   Working revision:    1.7
   Repository revision: 1.7     /cvsroot/python/python/dist/src/Lib/Attic/newimp.py,v
   Sticky Tag:          (none)
   Sticky Date:         (none)
   Sticky Options:      (none)

buffalo:Lib$ cvs update -dAP                                                   
cvs server: Updating .
U newimp.py
<rest of update output omitted>

buffalo:Lib$ ls -l newimp.py                                                   
-rwxr-xr-x   1 cgw      g023        54767 Sep  6 12:50 newimp.py

buffalo:Lib$ cvs status newimp.py 
===================================================================
File: newimp.py         Status: Up-to-date

   Working revision:    1.7
   Repository revision: 1.7     /cvsroot/python/python/dist/src/Lib/Attic/newimp.py,v
   Sticky Tag:          (none)
   Sticky Date:         (none)
   Sticky Options:      (none)

If I edit the CVS/Entries file and remove "newimp.py" from there, the
problem goes away.  But I work with many CVS repositories, and the
Python repository at SourceForge is the only one that forces me to
manually edit the Entries file.  You're really not supposed to need to
do that!

I'm running CVS version 1.10.6.  I think 1.10.6 is supposed to be a
"good" version to use.  What are other people using?  Does everybody
just go around editing CVS/Entries whenever files are removed from the
repository?  What am I doing wrong?  I'm starting to get a little
annoyed by the SourceForge CVS server.  Is it just me?






From nascheme@enme.ucalgary.ca  Wed Sep  6 19:06:29 2000
From: nascheme@enme.ucalgary.ca (Neil Schemenauer)
Date: Wed, 6 Sep 2000 12:06:29 -0600
Subject: [Python-Dev] newimp.py
In-Reply-To: <14774.34140.432485.450929@buffalo.fnal.gov>; from Charles G Waldman on Wed, Sep 06, 2000 at 12:56:44PM -0500
References: <14774.21807.691920.988409@buffalo.fnal.gov> <14774.24312.78161.249542@beluga.mojam.com> <14774.34140.432485.450929@buffalo.fnal.gov>
Message-ID: <20000906120629.B1977@keymaster.enme.ucalgary.ca>

On Wed, Sep 06, 2000 at 12:56:44PM -0500, Charles G Waldman wrote:
> Something is really screwed up with CVS, or my understanding of it.

The latter I believe unless I completely misunderstand your transcript.
Look at "cvs remove".

  Neil


From cgw@fnal.gov  Wed Sep  6 19:19:50 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Wed, 6 Sep 2000 13:19:50 -0500 (CDT)
Subject: [Python-Dev] newimp.py
In-Reply-To: <20000906120629.B1977@keymaster.enme.ucalgary.ca>
References: <14774.21807.691920.988409@buffalo.fnal.gov>
 <14774.24312.78161.249542@beluga.mojam.com>
 <14774.34140.432485.450929@buffalo.fnal.gov>
 <20000906120629.B1977@keymaster.enme.ucalgary.ca>
Message-ID: <14774.35526.470896.324060@buffalo.fnal.gov>

Neil wrote:
 
 >Look at "cvs remove".

Sorry, I must have my "stupid" bit set today (didn't sleep enough last
night).  Do you mean that I'm supposed to cvs remove the file?  AFAIK,
when I do a "cvs update" that should remove all files that are no
longer pertinent.  Guido (or somebody else with CVS write access) does
the "cvs remove" and "cvs commit", and then when I do my next 
"cvs update" my local copy of the file should be removed.  At least
that's the way it works with all the other projects I track via CVS.

And of course if I try to "cvs remove newimp.py", I get: 

cvs [server aborted]: "remove" requires write access to the repository

as I would expect.

Or are you simply telling me that if I read the documentation on the
"cvs remove" command, the scales will fall from my eyes?  I've read
it, and it doesn't help :-(

Sorry for bugging everybody with my stupid CVS questions.  But I do
really think that something is screwy with the CVS repository.  And
I've never seen *any* documentation which suggests that you need to
manually edit the CVS/Entries file, which was Fred Drake's suggested
fix last time I reported such a problem with CVS.

Oh well, if this only affects me, then I guess the burden of proof is
on me.  Meanwhile I guess I just have to remember that I can't really
trust CVS to delete obsolete files.





From skip@mojam.com (Skip Montanaro)  Wed Sep  6 19:49:56 2000
From: skip@mojam.com (Skip Montanaro) (Skip Montanaro)
Date: Wed, 6 Sep 2000 13:49:56 -0500 (CDT)
Subject: [Python-Dev] newimp.py
In-Reply-To: <14774.35526.470896.324060@buffalo.fnal.gov>
References: <14774.21807.691920.988409@buffalo.fnal.gov>
 <14774.24312.78161.249542@beluga.mojam.com>
 <14774.34140.432485.450929@buffalo.fnal.gov>
 <20000906120629.B1977@keymaster.enme.ucalgary.ca>
 <14774.35526.470896.324060@buffalo.fnal.gov>
Message-ID: <14774.37332.534262.200618@beluga.mojam.com>

    Charles> Oh well, if this only affects me, then I guess the burden of
    Charles> proof is on me.  Meanwhile I guess I just have to remember that
    Charles> I can't really trust CVS to delete obsolete files.

Charles,

I'm not sure what to make of your problem.  I can't reproduce it.  On the
Linux systems from which I track the CVS repository, I run cvs 1.10.6,
1.10.7 and 1.10.8 and haven't seen the problem you describe.  I checked
six different Python trees on four different machines for evidence of
Lib/newimp.py.  One of the trees still references cvs.python.org and hasn't
been updated since September 4, 1999.  Even it doesn't have a Lib/newimp.py
file.  I believe the demise of Lib/newimp.py predates the creation of the
SourceForge CVS repository by quite a while.

You might try executing cvs checkout in a fresh directory and compare that
with your problematic tree.

Skip


From cgw@fnal.gov  Wed Sep  6 20:10:48 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Wed, 6 Sep 2000 14:10:48 -0500 (CDT)
Subject: [Python-Dev] newimp.py
In-Reply-To: <14774.37332.534262.200618@beluga.mojam.com>
References: <14774.21807.691920.988409@buffalo.fnal.gov>
 <14774.24312.78161.249542@beluga.mojam.com>
 <14774.34140.432485.450929@buffalo.fnal.gov>
 <20000906120629.B1977@keymaster.enme.ucalgary.ca>
 <14774.35526.470896.324060@buffalo.fnal.gov>
 <14774.37332.534262.200618@beluga.mojam.com>
Message-ID: <14774.38584.869242.974864@buffalo.fnal.gov>

Skip Montanaro writes:
 > 
 > I'm not sure what to make of your problem.  I can't reproduce it.  On the
 > Linux systems from which I track the CVS repository, I run cvs 1.10.6,
 > 1.10.7 and 1.10.8 and haven't had seen the problem you describe.

How about if you go to one of those CVS trees, cd Lib, and type
"cvs update newimp.py" ?

If I check out a new tree, "newimp.py" is indeed not there.  But if I
do "cvs update newimp.py" it appears.  I am sure that this is *not*
the correct behavior for CVS.  If a file has been cvs remove'd, then
updating it should not cause it to appear in my local repository.





From cgw@fnal.gov  Wed Sep  6 21:40:47 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Wed, 6 Sep 2000 15:40:47 -0500 (CDT)
Subject: [Python-Dev] newimp.py
In-Reply-To: <14774.43898.548664.200202@beluga.mojam.com>
References: <14774.21807.691920.988409@buffalo.fnal.gov>
 <14774.24312.78161.249542@beluga.mojam.com>
 <14774.34140.432485.450929@buffalo.fnal.gov>
 <20000906120629.B1977@keymaster.enme.ucalgary.ca>
 <14774.35526.470896.324060@buffalo.fnal.gov>
 <14774.37332.534262.200618@beluga.mojam.com>
 <14774.38584.869242.974864@buffalo.fnal.gov>
 <14774.43898.548664.200202@beluga.mojam.com>
Message-ID: <14774.43983.70263.934682@buffalo.fnal.gov>

Skip Montanaro writes:
 > 
 >     Charles> How about if you go to one of those CVS trees, cd Lib, and type
 >     Charles> "cvs update newimp.py" ?
 > 
 > I get 
 > 
 >     beluga:Lib% cd ~/src/python/dist/src/Lib/
 >     beluga:Lib% cvs update newinp.py
 >     cvs server: nothing known about newinp.py

That's because you typed "newinp", not "newimp".  Try it with an "M"
and see what happens.

    -C



From effbot@telia.com  Wed Sep  6 22:17:37 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Wed, 6 Sep 2000 23:17:37 +0200
Subject: [Python-Dev] newimp.py
References: <14774.21807.691920.988409@buffalo.fnal.gov><14774.24312.78161.249542@beluga.mojam.com><14774.34140.432485.450929@buffalo.fnal.gov><20000906120629.B1977@keymaster.enme.ucalgary.ca><14774.35526.470896.324060@buffalo.fnal.gov><14774.37332.534262.200618@beluga.mojam.com><14774.38584.869242.974864@buffalo.fnal.gov><14774.43898.548664.200202@beluga.mojam.com> <14774.43983.70263.934682@buffalo.fnal.gov>
Message-ID: <04bd01c01847$e9a197c0$766940d5@hagrid>

charles wrote:
>  >     Charles> How about if you go to one of those CVS trees, cd Lib, and type
>  >     Charles> "cvs update newimp.py" ?

why do you keep doing that? ;-)

> That's because you typed "newinp", not "newimp".  Try it with an "M"
> and see what happens.

the file has state "Exp".  iirc, it should be "dead" for CVS
to completely ignore it.

guess it was removed long before the CVS repository was
moved to source forge, and that something went wrong
somewhere in the process...

</F>



From cgw@fnal.gov  Wed Sep  6 22:08:09 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Wed, 6 Sep 2000 16:08:09 -0500 (CDT)
Subject: [Python-Dev] newimp.py
In-Reply-To: <14774.44642.258108.758548@beluga.mojam.com>
References: <14774.21807.691920.988409@buffalo.fnal.gov>
 <14774.24312.78161.249542@beluga.mojam.com>
 <14774.34140.432485.450929@buffalo.fnal.gov>
 <20000906120629.B1977@keymaster.enme.ucalgary.ca>
 <14774.35526.470896.324060@buffalo.fnal.gov>
 <14774.37332.534262.200618@beluga.mojam.com>
 <14774.38584.869242.974864@buffalo.fnal.gov>
 <14774.43898.548664.200202@beluga.mojam.com>
 <14774.43983.70263.934682@buffalo.fnal.gov>
 <14774.44642.258108.758548@beluga.mojam.com>
Message-ID: <14774.45625.177110.349575@buffalo.fnal.gov>

Skip Montanaro writes:

 > Ah, yes, I get something:
 > 
 >     beluga:Lib% cvs update newimp.py
 >     U newimp.py
 >     beluga:Lib% ls -l newimp.py 
 >     -rwxrwxr-x    1 skip     skip        54767 Jul 12  1995 newimp.py

 > Why newimp.py is still available, I have no idea.  Note the beginning of the
 > module's doc string:

It's clear that the file is quite obsolete.  It's been moved to the
Attic, and the most recent tag on it is r13beta1.

What's not clear is why "cvs update" still fetches it.

Something is way screwy with SourceForge's CVS server, I'm tellin' ya!

Maybe it's running on a Linux box and uses the pthreads library?  ;-)

I guess since the server is at SourceForge, it's not really under
immediate control of anybody at either python.org or
BeOpen/PythonLabs, so it doesn't seem very likely that this will get
looked into anytime soon.  Sigh....





From guido@beopen.com  Thu Sep  7 04:07:09 2000
From: guido@beopen.com (Guido van Rossum)
Date: Wed, 06 Sep 2000 22:07:09 -0500
Subject: [Python-Dev] newimp.py
In-Reply-To: Your message of "Wed, 06 Sep 2000 23:17:37 +0200."
 <04bd01c01847$e9a197c0$766940d5@hagrid>
References: <14774.21807.691920.988409@buffalo.fnal.gov><14774.24312.78161.249542@beluga.mojam.com><14774.34140.432485.450929@buffalo.fnal.gov><20000906120629.B1977@keymaster.enme.ucalgary.ca><14774.35526.470896.324060@buffalo.fnal.gov><14774.37332.534262.200618@beluga.mojam.com><14774.38584.869242.974864@buffalo.fnal.gov><14774.43898.548664.200202@beluga.mojam.com> <14774.43983.70263.934682@buffalo.fnal.gov>
 <04bd01c01847$e9a197c0$766940d5@hagrid>
Message-ID: <200009070307.WAA07393@cj20424-a.reston1.va.home.com>

> the file has state "Exp".  iirc, it should be "dead" for CVS
> to completely ignore it.
> 
> guess it was removed long before the CVS repository was
> moved to source forge, and that something went wrong
> somewhere in the process...

Could've been an old version of CVS.

Anyway, I checked it out, rm'ed it, cvs-rm'ed it, and committed it --
that seems to have taken care of it.

I hope the file wasn't in any beta distribution.  Was it?

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From sjoerd@oratrix.nl  Thu Sep  7 11:40:28 2000
From: sjoerd@oratrix.nl (Sjoerd Mullender)
Date: Thu, 07 Sep 2000 12:40:28 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules cPickle.c,2.50,2.51
In-Reply-To: Your message of Wed, 06 Sep 2000 17:11:43 -0700.
 <200009070011.RAA09907@slayer.i.sourceforge.net>
References: <200009070011.RAA09907@slayer.i.sourceforge.net>
Message-ID: <20000907104029.2B35031047C@bireme.oratrix.nl>

This doesn't work.  Neither m nor d is initialized at this point.

On Wed, Sep 6 2000 Guido van Rossum wrote:

> Update of /cvsroot/python/python/dist/src/Modules
> In directory slayer.i.sourceforge.net:/tmp/cvs-serv9746
> 
> Modified Files:
> 	cPickle.c 
> Log Message:
> Simple fix from Jim Fulton to avoid returning a half-initialized
> module when e.g. copy_reg.py doesn't exist.  This caused a core dump.
> 
> This closes SF bug 112944.
> 
> 
> Index: cPickle.c
> ===================================================================
> RCS file: /cvsroot/python/python/dist/src/Modules/cPickle.c,v
> retrieving revision 2.50
> retrieving revision 2.51
> diff -C2 -r2.50 -r2.51
> *** cPickle.c	2000/08/12 20:58:11	2.50
> --- cPickle.c	2000/09/07 00:11:40	2.51
> ***************
> *** 4522,4525 ****
> --- 4522,4527 ----
>       PyObject *compatible_formats;
>   
> +     if (init_stuff(m, d) < 0) return;
> + 
>       Picklertype.ob_type = &PyType_Type;
>       Unpicklertype.ob_type = &PyType_Type;
> ***************
> *** 4543,4547 ****
>       Py_XDECREF(format_version);
>       Py_XDECREF(compatible_formats);
> - 
> -     init_stuff(m, d);
>   }
> --- 4545,4547 ----
> 
> 

-- Sjoerd Mullender <sjoerd.mullender@oratrix.com>


From thomas.heller@ion-tof.com  Thu Sep  7 14:42:01 2000
From: thomas.heller@ion-tof.com (Thomas Heller)
Date: Thu, 7 Sep 2000 15:42:01 +0200
Subject: [Python-Dev] SF checkin policies
Message-ID: <02a401c018d1$669fbcf0$4500a8c0@thomasnb>

What are the checkin policies to the sourceforge
CVS repository?

Now that I have checkin rights (for the distutils),
I'm about to checkin new versions of the bdist_wininst
command. This is still under active development.

Should CVS only contain complete, working versions?
Or are intermediate, nonworking versions allowed?
Will a warning be given here on python-dev just before
a new (beta) distribution is created?

Thomas Heller





From fredrik@pythonware.com  Thu Sep  7 15:04:13 2000
From: fredrik@pythonware.com (Fredrik Lundh)
Date: Thu, 7 Sep 2000 16:04:13 +0200
Subject: [Python-Dev] SF checkin policies
References: <02a401c018d1$669fbcf0$4500a8c0@thomasnb>
Message-ID: <025501c018d4$81301800$0900a8c0@SPIFF>

> What are the checkin policies to the sourceforge
> CVS repository?

http://python.sourceforge.net/peps/pep-0200.html

    Use good sense when committing changes.  You should know what we
    mean by good sense or we wouldn't have given you commit privileges
    <0.5 wink>.

    /.../

    Any significant new feature must be described in a PEP and
    approved before it is checked in.

    /.../

    Any significant code addition, such as a new module or large
    patch, must include test cases for the regression test and
    documentation.  A patch should not be checked in until the tests
    and documentation are ready.

    /.../

    It is not acceptable for any checked in code to cause the
    regression test to fail.  If a checkin causes a failure, it must
    be fixed within 24 hours or it will be backed out.

</F>



From guido@beopen.com  Thu Sep  7 16:50:25 2000
From: guido@beopen.com (Guido van Rossum)
Date: Thu, 07 Sep 2000 10:50:25 -0500
Subject: [Python-Dev] SF checkin policies
In-Reply-To: Your message of "Thu, 07 Sep 2000 15:42:01 +0200."
 <02a401c018d1$669fbcf0$4500a8c0@thomasnb>
References: <02a401c018d1$669fbcf0$4500a8c0@thomasnb>
Message-ID: <200009071550.KAA09309@cj20424-a.reston1.va.home.com>

> What are the checkin policies to the sourceforge
> CVS repository?
> 
> Now that I have checkin rights (for the distutils),
> I'm about to checkin new versions of the bdist_wininst
> command. This is still under active development.
> 
> Should CVS only contain complete, working versions?
> Or are intermediate, nonworking versions allowed?
> Will a warning be given here on python-dev just before
> a new (beta) distribution is created?

Please check in only working, tested code!  There are lots of people
(also outside the developers group) who do daily checkouts.  If they
get broken code, they'll scream hell!

We publicize and discuss the release schedule pretty intensely here so
you should have plenty of warning.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From Vladimir.Marangozov@inrialpes.fr  Thu Sep  7 16:59:40 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Thu, 7 Sep 2000 17:59:40 +0200 (CEST)
Subject: [Python-Dev] newimp.py
In-Reply-To: <200009070307.WAA07393@cj20424-a.reston1.va.home.com> from "Guido van Rossum" at Sep 06, 2000 10:07:09 PM
Message-ID: <200009071559.RAA06832@python.inrialpes.fr>

Guido van Rossum wrote:
> 
> Anyway, I checked it out, rm'ed it, cvs-rm'ed it, and committed it --
> that seems to have taken care of it.
> 
> I hope the file wasn't in any beta distribution.  Was it?

No. There's a .cvsignore file in the root directory of the latest
tarball, though. Not a big deal.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From Vladimir.Marangozov@inrialpes.fr  Thu Sep  7 17:46:11 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Thu, 7 Sep 2000 18:46:11 +0200 (CEST)
Subject: [Python-Dev] python -U fails
Message-ID: <200009071646.SAA07004@python.inrialpes.fr>

Seen on c.l.py (import site fails due to eval on a Unicode string):

~/python/Python-2.0b1>python -U
'import site' failed; use -v for traceback
Python 2.0b1 (#2, Sep  7 2000, 12:59:53) 
[GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
Type "copyright", "credits" or "license" for more information.
>>> eval (u"1+2")
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: eval() argument 1 must be string or code object
>>> 

The offending eval is in os.py

Traceback (most recent call last):
  File "./Lib/site.py", line 60, in ?
    import sys, os
  File "./Lib/os.py", line 331, in ?
    if _exists("fork") and not _exists("spawnv") and _exists("execv"):
  File "./Lib/os.py", line 325, in _exists
    eval(name)

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From akuchlin@mems-exchange.org  Thu Sep  7 21:01:44 2000
From: akuchlin@mems-exchange.org (Andrew Kuchling)
Date: Thu, 07 Sep 2000 16:01:44 -0400
Subject: [Python-Dev] hasattr() and Unicode strings
Message-ID: <E13X7rk-0005N9-00@kronos.cnri.reston.va.us>

hasattr(), getattr(), and doubtless other built-in functions
don't accept Unicode strings at all:

>>> import sys
>>> hasattr(sys, u'abc')
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: hasattr, argument 2: expected string, unicode found

Is this a bug or a feature?  I'd say bug; the Unicode should be
coerced using the default ASCII encoding, and an exception raised if
that isn't possible.
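In other words, something like this (a rough sketch of the proposed
behaviour in modern spelling -- the helper name is mine, not anything
in the core):

```python
def coerce_attr_name(name):
    # Hypothetical helper illustrating the proposed coercion:
    # accept an attribute name only if it encodes cleanly under
    # the default ASCII encoding, and raise otherwise.
    try:
        return name.encode("ascii").decode("ascii")
    except UnicodeEncodeError:
        raise TypeError("attribute name must be ASCII-encodable")
```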

--amk


From fdrake@beopen.com  Thu Sep  7 21:02:52 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Thu, 7 Sep 2000 16:02:52 -0400 (EDT)
Subject: [Python-Dev] hasattr() and Unicode strings
In-Reply-To: <E13X7rk-0005N9-00@kronos.cnri.reston.va.us>
References: <E13X7rk-0005N9-00@kronos.cnri.reston.va.us>
Message-ID: <14775.62572.442732.589738@cj42289-a.reston1.va.home.com>

Andrew Kuchling writes:
 > Is this a bug or a feature?  I'd say bug; the Unicode should be
 > coerced using the default ASCII encoding, and an exception raised if
 > that isn't possible.

  I agree.
  Marc-Andre, what do you think?


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From martin@loewis.home.cs.tu-berlin.de  Thu Sep  7 21:08:45 2000
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Thu, 7 Sep 2000 22:08:45 +0200
Subject: [Python-Dev] xml missing in Windows installer?
Message-ID: <200009072008.WAA00862@loewis.home.cs.tu-berlin.de>

Using the 2.0b1 Windows installer from BeOpen, I could not find
Lib/xml afterwards, whereas the .tgz does contain the xml package. Was
this intentional? Did I miss something?

Regards,
Martin



From effbot@telia.com  Thu Sep  7 21:25:02 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Thu, 7 Sep 2000 22:25:02 +0200
Subject: [Python-Dev] xml missing in Windows installer?
References: <200009072008.WAA00862@loewis.home.cs.tu-berlin.de>
Message-ID: <004c01c01909$b832a220$766940d5@hagrid>

martin wrote:

> Using the 2.0b1 Windows installer from BeOpen, I could not find
> Lib/xml afterwards, whereas the .tgz does contain the xml package. Was
> this intentional? Did I miss something?

Date: Thu, 7 Sep 2000 01:34:04 -0700
From: Tim Peters <tim_one@users.sourceforge.net>
To: python-checkins@python.org
Subject: [Python-checkins] CVS: python/dist/src/PCbuild python20.wse,1.15,1.16

Update of /cvsroot/python/python/dist/src/PCbuild
In directory slayer.i.sourceforge.net:/tmp/cvs-serv31884

Modified Files:
 python20.wse 
Log Message:
Windows installer, reflecting changes that went into a replacement 2.0b1
.exe that will show up on PythonLabs.com later today:
    Include the Lib\xml\ package (directory + subdirectories).
    Include the Lib\lib-old\ directory.
    Include the Lib\test\*.xml test cases (well, just one now).
    Remove the redundant install of Lib\*.py (looks like a stray duplicate
        line that's been there a long time).  Because of this, the new
        installer is a little smaller despite having more stuff in it.

...

</F>



From guido@beopen.com  Thu Sep  7 22:32:16 2000
From: guido@beopen.com (Guido van Rossum)
Date: Thu, 07 Sep 2000 16:32:16 -0500
Subject: [Python-Dev] hasattr() and Unicode strings
In-Reply-To: Your message of "Thu, 07 Sep 2000 16:01:44 -0400."
 <E13X7rk-0005N9-00@kronos.cnri.reston.va.us>
References: <E13X7rk-0005N9-00@kronos.cnri.reston.va.us>
Message-ID: <200009072132.QAA10047@cj20424-a.reston1.va.home.com>

> hasattr(), getattr(), and doubtless other built-in functions
> don't accept Unicode strings at all:
> 
> >>> import sys
> >>> hasattr(sys, u'abc')
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> TypeError: hasattr, argument 2: expected string, unicode found
> 
> Is this a bug or a feature?  I'd say bug; the Unicode should be
> coerced using the default ASCII encoding, and an exception raised if
> that isn't possible.

Agreed.

There are probably a bunch of things that need to be changed before
this works though; getattr() c.s. require a string, then call
PyObject_GetAttr() which also checks for a string unless the object
supports tp_getattro -- but that's only true for classes and
instances.

Also, should we convert the string to 8-bit, or should we allow
Unicode attribute names?

It seems there's no easy fix -- better address this after 2.0 is
released.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From martin@loewis.home.cs.tu-berlin.de  Thu Sep  7 21:26:28 2000
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Thu, 7 Sep 2000 22:26:28 +0200
Subject: [Python-Dev] Naming of config.h
Message-ID: <200009072026.WAA01094@loewis.home.cs.tu-berlin.de>

The fact that Python installs its config.h as
<prefix>/python2.0/config.h is annoying if one tries to combine Python
with some other autoconfiscated package.

If you configure that other package, it detects that it needs to add
-I/usr/local/include/python2.0; it also provides its own
config.h. When compiling the files

#include "config.h"

could then mean either one or the other. That can cause quite some
confusion: if the one of the package is used, LONG_LONG might not
exist, even though it should on that port.

This issue can be mitigated by renaming "config.h" to
"pyconfig.h". That still might result in duplicate defines, but likely
SIZE_FLOAT (for example) has the same value in all definitions.

Regards,
Martin



From gstein@lyra.org  Thu Sep  7 21:41:12 2000
From: gstein@lyra.org (Greg Stein)
Date: Thu, 7 Sep 2000 13:41:12 -0700
Subject: [Python-Dev] Naming of config.h
In-Reply-To: <200009072026.WAA01094@loewis.home.cs.tu-berlin.de>; from martin@loewis.home.cs.tu-berlin.de on Thu, Sep 07, 2000 at 10:26:28PM +0200
References: <200009072026.WAA01094@loewis.home.cs.tu-berlin.de>
Message-ID: <20000907134112.W3278@lyra.org>

On Thu, Sep 07, 2000 at 10:26:28PM +0200, Martin v. Loewis wrote:
>...
> This issue can be relaxed by renaming the "config.h" to
> "pyconfig.h". That still might result in duplicate defines, but likely
> SIZE_FLOAT (for example) has the same value in all definitions.

This is not a simple problem. APR (a subcomponent of Apache) is set up to
build as an independent library. It is also autoconf'd, but it goes through
a *TON* of work to avoid passing any autoconf symbols into the public space.

Renaming the config.h file would be an interesting start, but it won't solve
the conflicting symbols (or typedefs!) problem. And from a portability
standpoint, that is important: some compilers don't like redefinitions, even
if they are the same.

IOW, if you want to make this "correct", then plan on setting aside a good
chunk of time.

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/


From guido@beopen.com  Thu Sep  7 22:57:39 2000
From: guido@beopen.com (Guido van Rossum)
Date: Thu, 07 Sep 2000 16:57:39 -0500
Subject: [Python-Dev] newimp.py
In-Reply-To: Your message of "Thu, 07 Sep 2000 17:59:40 +0200."
 <200009071559.RAA06832@python.inrialpes.fr>
References: <200009071559.RAA06832@python.inrialpes.fr>
Message-ID: <200009072157.QAA10441@cj20424-a.reston1.va.home.com>

> No. There's a .cvsignore file in the root directory of the latest
> tarball, though. Not a big deal.

Typically we leave all the .cvsignore files in.  They don't hurt
anybody, and getting rid of them manually is just a pain.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From akuchlin@mems-exchange.org  Thu Sep  7 22:27:03 2000
From: akuchlin@mems-exchange.org (Andrew Kuchling)
Date: Thu, 7 Sep 2000 17:27:03 -0400
Subject: [Python-Dev] hasattr() and Unicode strings
In-Reply-To: <200009072132.QAA10047@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Thu, Sep 07, 2000 at 04:32:16PM -0500
References: <E13X7rk-0005N9-00@kronos.cnri.reston.va.us> <200009072132.QAA10047@cj20424-a.reston1.va.home.com>
Message-ID: <20000907172703.A1095@kronos.cnri.reston.va.us>

On Thu, Sep 07, 2000 at 04:32:16PM -0500, Guido van Rossum wrote:
>It seems there's no easy fix -- better address this after 2.0 is
>released.

OK; I'll file a bug report on SourceForge so this doesn't get forgotten.

--amk


From fdrake@beopen.com  Thu Sep  7 22:26:18 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Thu, 7 Sep 2000 17:26:18 -0400 (EDT)
Subject: [Python-Dev] New PDF documentation & Windows installer
Message-ID: <14776.2042.985615.611778@cj42289-a.reston1.va.home.com>

  As many people noticed, there was a problem with the PDF files
generated for the recent Python 2.0b1 release.  I've found & corrected
the problem, and uploaded new packages to the Web site.  Please get
new PDF files from:

	http://www.pythonlabs.com/tech/python2.0/download.html

  The new files show a date of September 7, 2000, rather than
September 5, 2000.
  An updated Windows installer is available which actually installs
the XML package.
  I'm sorry for any inconvenience these problems have caused.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From effbot@telia.com  Thu Sep  7 22:43:28 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Thu, 7 Sep 2000 23:43:28 +0200
Subject: [Python-Dev] update: tkinter problems on win95
Message-ID: <004101c01914$ae501ca0$766940d5@hagrid>

just fyi, I've now reduced the problem to two small C programs:
one program initializes Tcl and Tk in the same way as Tkinter --
and the program hangs in the same way as Tkinter (most likely
inside some finalization code that's called from DllMain).

the other does things in the same way as wish, and it never
hangs...

:::

still haven't figured out exactly what's different, but it's clearly
a problem with _tkinter's initialization code, and nothing else.  I'll
post a patch as soon as I have one...

</F>



From barry@scottb.demon.co.uk  Fri Sep  8 00:02:32 2000
From: barry@scottb.demon.co.uk (Barry Scott)
Date: Fri, 8 Sep 2000 00:02:32 +0100
Subject: [Python-Dev] xml missing in Windows installer?
In-Reply-To: <004c01c01909$b832a220$766940d5@hagrid>
Message-ID: <000901c0191f$b48d65e0$060210ac@private>

Please don't release new kits with identical names/versions as old kits.

How do you expect anyone to tell if they have the fix or not?

Finding and fixing bugs show you care about quality.
Stealth releases negate the benefit.

	Barry


> -----Original Message-----
> From: python-dev-admin@python.org [mailto:python-dev-admin@python.org]On
> Behalf Of Fredrik Lundh
> Sent: 07 September 2000 21:25
> To: Martin v. Loewis
> Cc: python-dev@python.org
> Subject: Re: [Python-Dev] xml missing in Windows installer?
> 
> 
> martin wrote:
> 
> > Using the 2.0b1 Windows installer from BeOpen, I could not find
> > Lib/xml afterwards, whereas the .tgz does contain the xml package. Was
> > this intentional? Did I miss something?
> 
> Date: Thu, 7 Sep 2000 01:34:04 -0700
> From: Tim Peters <tim_one@users.sourceforge.net>
> To: python-checkins@python.org
> Subject: [Python-checkins] CVS: python/dist/src/PCbuild python20.wse,1.15,1.16
> 
> Update of /cvsroot/python/python/dist/src/PCbuild
> In directory slayer.i.sourceforge.net:/tmp/cvs-serv31884
> 
> Modified Files:
>  python20.wse 
> Log Message:
> Windows installer, reflecting changes that went into a replacement 2.0b1
> .exe that will show up on PythonLabs.com later today:
>     Include the Lib\xml\ package (directory + subdirectories).
>     Include the Lib\lib-old\ directory.
>     Include the Lib\test\*.xml test cases (well, just one now).
>     Remove the redundant install of Lib\*.py (looks like a stray duplicate
>         line that's been there a long time).  Because of this, the new
>         installer is a little smaller despite having more stuff in it.
> 
> ...
> 
> </F>
> 
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev@python.org
> http://www.python.org/mailman/listinfo/python-dev
> 


From gward@mems-exchange.org  Fri Sep  8 00:16:56 2000
From: gward@mems-exchange.org (Greg Ward)
Date: Thu, 7 Sep 2000 19:16:56 -0400
Subject: [Python-Dev] Noisy test_gc
Message-ID: <20000907191655.A9664@ludwig.cnri.reston.va.us>

Just built 2.0b1, and noticed that the GC test script is rather noisy:

  ...
  test_gc
  gc: collectable <list 0x818cf54>
  gc: collectable <dictionary 0x822f8b4>
  gc: collectable <list 0x818cf54>
  gc: collectable <tuple 0x822f484>
  gc: collectable <class 0x822f8b4>
  gc: collectable <dictionary 0x822f8e4>
  gc: collectable <A instance at 0x818cf54>
  gc: collectable <dictionary 0x822fb6c>
  gc: collectable <A instance at 0x818cf54>
  gc: collectable <dictionary 0x822fb9c>
  gc: collectable <instance method 0x81432bc>
  gc: collectable <B instance at 0x822f0d4>
  gc: collectable <dictionary 0x822fc9c>
  gc: uncollectable <dictionary 0x822fc34>
  gc: uncollectable <A instance at 0x818cf54>
  gc: collectable <dictionary 0x822fbcc>
  gc: collectable <function 0x8230fb4>
  test_gdbm
  ...

which is the same as it was the last time I built from CVS, but I would
have thought this should go away for a real release...

        Greg


From guido@beopen.com  Fri Sep  8 02:07:58 2000
From: guido@beopen.com (Guido van Rossum)
Date: Thu, 07 Sep 2000 20:07:58 -0500
Subject: [Python-Dev] GPL license issues hit Linux Today
Message-ID: <200009080107.UAA11841@cj20424-a.reston1.va.home.com>

http://linuxtoday.com/news_story.php3?ltsn=2000-09-07-001-21-OS-CY-DB

Plus my response

http://linuxtoday.com/news_story.php3?ltsn=2000-09-07-011-21-OS-CY-SW

I'll be off until Monday, relaxing at the beach!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From Vladimir.Marangozov@inrialpes.fr  Fri Sep  8 01:14:07 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Fri, 8 Sep 2000 02:14:07 +0200 (CEST)
Subject: [Python-Dev] Noisy test_gc
In-Reply-To: <20000907191655.A9664@ludwig.cnri.reston.va.us> from "Greg Ward" at Sep 07, 2000 07:16:56 PM
Message-ID: <200009080014.CAA07599@python.inrialpes.fr>

Greg Ward wrote:
> 
> Just built 2.0b1, and noticed that the GC test script is rather noisy:

The GC patch at SF makes it silent. It will be fixed for the final release.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From gward@python.net  Fri Sep  8 03:40:07 2000
From: gward@python.net (Greg Ward)
Date: Thu, 7 Sep 2000 22:40:07 -0400
Subject: [Python-Dev] Finding landmark when prefix != exec-prefix
Message-ID: <20000907224007.A959@beelzebub>

Hey all --

this is a bug I noticed in 1.5.2 ages ago, and never investigated
further.  I've just figured it out a little bit more; right now I can
only verify it in 1.5, as I don't have the right sort of 1.6 or 2.0
installation at home.  So if this has been fixed, I'll just shut up.

Bottom line: if you have an installation where prefix != exec-prefix,
and there is another Python installation on the system, then Python
screws up finding the landmark file (string.py in 1.5.2) and computes
the wrong prefix and exec-prefix.

Here's the scenario: I have a Red Hat 6.2 installation with the
"official" Red Hat python in /usr/bin/python.  I have a local build
installed with prefix=/usr/local/python and
exec-prefix=/usr/local/python.i86-linux; /usr/local/bin/python is a
symlink to ../python.i86-linux/bin/python.  (This dates to my days of
trying to understand what gets installed where.  Now, of course, I could
tell you what Python installs where in my sleep with one hand tied
behind my back... ;-)

Witness:
  $ /usr/bin/python -c "import sys ; print sys.prefix"
  /usr
  $/usr/local/bin/python -c "import sys ; print sys.prefix"
  /usr

...even though /usr/local/bin/python's library is really in
/usr/local/python/lib/python1.5 and
/usr/local/python.i86-linux/lib/python1.5.

If I erase Red Hat's Python, then /usr/local/bin/python figures out its
prefix correctly.

Using "strace" sheds a little more light on things; here's what I get
after massaging the "strace" output a bit (grep for "string.py"; all
that shows up are 'stat()' calls, where only the last succeeds; I've
stripped out everything but the filename):

  /usr/local/bin/../python.i86-linux/bin/lib/python1.5/string.py
  /usr/local/bin/../python.i86-linux/bin/lib/python1.5/string.pyc
  /usr/local/bin/../python.i86-linux/lib/python1.5/string.py
  /usr/local/bin/../python.i86-linux/lib/python1.5/string.pyc
  /usr/local/bin/../lib/python1.5/string.py
  /usr/local/bin/../lib/python1.5/string.pyc
  /usr/local/bin/lib/python1.5/string.py
  /usr/local/bin/lib/python1.5/string.pyc
  /usr/local/lib/python1.5/string.py
  /usr/local/lib/python1.5/string.pyc
  /usr/lib/python1.5/string.py                # success because of Red Hat's
                                              # Python installation

Well, of course.  Python doesn't know what its true prefix is until it
has found its landmark file, but it can't find its landmark until it
knows its true prefix.  Here's the "strace" output after erasing Red
Hat's Python RPM:

  $ strace /usr/local/bin/python -c 1 2>&1 | grep 'string\.py'
  /usr/local/bin/../python.i86-linux/bin/lib/python1.5/string.py
  /usr/local/bin/../python.i86-linux/bin/lib/python1.5/string.pyc
  /usr/local/bin/../python.i86-linux/lib/python1.5/string.py
  /usr/local/bin/../python.i86-linux/lib/python1.5/string.pyc
  /usr/local/bin/../lib/python1.5/string.py
  /usr/local/bin/../lib/python1.5/string.pyc
  /usr/local/bin/lib/python1.5/string.py
  /usr/local/bin/lib/python1.5/string.pyc
  /usr/local/lib/python1.5/string.py
  /usr/local/lib/python1.5/string.pyc
  /usr/lib/python1.5/string.py               # now fail since I removed 
  /usr/lib/python1.5/string.pyc              # Red Hat's RPM
  /usr/local/python/lib/python1.5/string.py

A-ha!  When the /usr installation is no longer there to fool it, Python
then looks in the right place.

So, has this bug been fixed in 1.6 or 2.0?  If not, where do I look?

        Greg

PS. what about hard-coding a prefix and exec-prefix in the binary, and
only searching for the landmark if the hard-coded values fail?  That
way, this complicated and expensive search is only done if the
installation has been relocated.
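PPS. for the curious, the search visible in the strace output above
boils down to something like the following sketch (names are mine, and
this glosses over the compiled-in fallback prefix that gets tried
last):

```python
import os

def search_for_prefix(argv0_dir, version="1.5", landmark="string.py"):
    # Starting from the directory containing the executable, walk up
    # the tree and test each level for lib/python<version>/<landmark>;
    # the first hit becomes sys.prefix.
    path = argv0_dir
    while path:
        candidate = os.path.join(path, "lib", "python" + version, landmark)
        if os.path.isfile(candidate):
            return path
        parent = os.path.dirname(path)
        if parent == path:      # reached the filesystem root
            break
        path = parent
    return None
```

That's why the stray /usr installation wins: /usr is an ancestor of
/usr/local/bin, so it gets probed before the compiled-in prefix.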

-- 
Greg Ward                                      gward@python.net
http://starship.python.net/~gward/


From jeremy@beopen.com  Fri Sep  8 04:13:09 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Thu, 7 Sep 2000 23:13:09 -0400 (EDT)
Subject: [Python-Dev] Finding landmark when prefix != exec-prefix
In-Reply-To: <20000907224007.A959@beelzebub>
References: <20000907224007.A959@beelzebub>
Message-ID: <14776.22853.316652.994320@bitdiddle.concentric.net>

>>>>> "GW" == Greg Ward <gward@python.net> writes:

  GW> PS. what about hard-coding a prefix and exec-prefix in the
  GW> binary, and only searching for the landmark if the hard-coded
  GW> values fail?  That way, this complicated and expensive search is
  GW> only done if the installation has been relocated.

I've tried not to understand much about the search process.  I know
that it is slow (relatively speaking) and that it can be avoided by
setting the PYTHONHOME environment variable.

Jeremy


From MarkH@ActiveState.com  Fri Sep  8 05:02:07 2000
From: MarkH@ActiveState.com (Mark Hammond)
Date: Fri, 8 Sep 2000 15:02:07 +1100
Subject: [Python-Dev] win32all-133 for Python 1.6, and win32all-134 for Python 2.0
Message-ID: <ECEPKNMJLHAPFFJHDOJBIEGJDIAA.MarkH@ActiveState.com>

FYI - I'm updating the starship pages, and will make an announcement to the
newsgroup soon.

But in the meantime, some advance notice:

* All new win32all builds will be released from
http://www.activestate.com/Products/ActivePython/win32all.html.  This is
good for me - ActiveState actually have paid systems guys :-)
win32all-133.exe for 1.6b1 and 1.6 final can be found there.

* win32all-134.exe for the Python 2.x betas is not yet referenced at that
page, but is at
www.activestate.com/download/ActivePython/windows/win32all/win32all-134.exe

If you have ActivePython, you do _not_ need win32all.

Please let me know if you have any problems, or any other questions
regarding this...

Thanks,

Mark.


_______________________________________________
win32-reg-users maillist  -  win32-reg-users@pythonpros.com
http://mailman.pythonpros.com/mailman/listinfo/win32-reg-users



From tim_one@email.msn.com  Fri Sep  8 08:45:14 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Fri, 8 Sep 2000 03:45:14 -0400
Subject: [Python-Dev] xml missing in Windows installer?
In-Reply-To: <000901c0191f$b48d65e0$060210ac@private>
Message-ID: <LNBBLJKPBEHFEDALKOLCOENFHEAA.tim_one@email.msn.com>

[Barry Scott]
> Please don't release new kits with identical names/versions as old kits.

It *is* the 2.0b1 release; the only difference is that two of the 2.0b1 Lib
sub-directories that got left out by mistake got included.  This is
repairing an error in the release process, not in the code.

> How do you expect anyone to tell if they have the fix or not?

If they have Lib\xml, they've got the repaired release.  Else they've got
the flawed one.  They can also tell from Python's startup line:

C:\Python20>python
Python 2.0b1 (#4, Sep  7 2000, 02:40:55) [MSC 32 bit (Intel)] on win32
Type "copyright", "credits" or "license" for more information.
>>>

The "#4" and the timestamp say that's the repaired release.  The flawed
release has "#3" there and an earlier timestamp.  If someone is still
incompetent to tell the difference <wink>, they can look at the installer
file size.

> Finding and fixing bugs show you care about quality.
> Stealth releases negate the benefit.

'Twasn't meant to be a "stealth release":  that's *another* screwup!  The
webmaster  didn't get the explanation onto the download page yet, for
reasons beyond his control.  Fred Drake *did* manage to update the
installer, and that was the most important part.  The explanation will show
up ... beats me, ask CNRI <wink>.




From mal@lemburg.com  Fri Sep  8 12:47:08 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 08 Sep 2000 13:47:08 +0200
Subject: [Python-Dev] python -U fails
References: <200009071646.SAA07004@python.inrialpes.fr>
Message-ID: <39B8D1BC.9B46E005@lemburg.com>

Vladimir Marangozov wrote:
> 
> Seen on c.l.py (import site fails due to eval on an unicode string):
> 
> ~/python/Python-2.0b1>python -U
> 'import site' failed; use -v for traceback
> Python 2.0b1 (#2, Sep  7 2000, 12:59:53)
> [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
> Type "copyright", "credits" or "license" for more information.
> >>> eval (u"1+2")
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> TypeError: eval() argument 1 must be string or code object
> >>>
> 
> The offending eval is in os.py
> 
> Traceback (most recent call last):
>   File "./Lib/site.py", line 60, in ?
>     import sys, os
>   File "./Lib/os.py", line 331, in ?
>     if _exists("fork") and not _exists("spawnv") and _exists("execv"):
>   File "./Lib/os.py", line 325, in _exists
>     eval(name)

Note that many things fail when Python is started with -U... that
switch was introduced to be able to get an idea of which parts
of the standard library fail to work in a mixed string/Unicode environment.

In the above case, I guess the eval() could be replaced by some
other logic which does a try: except NameError: check.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From mal@lemburg.com  Fri Sep  8 13:02:46 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 08 Sep 2000 14:02:46 +0200
Subject: [Python-Dev] hasattr() and Unicode strings
References: <E13X7rk-0005N9-00@kronos.cnri.reston.va.us> <14775.62572.442732.589738@cj42289-a.reston1.va.home.com>
Message-ID: <39B8D566.4011E433@lemburg.com>

"Fred L. Drake, Jr." wrote:
> 
> Andrew Kuchling writes:
>  > Is this a bug or a feature?  I'd say bug; the Unicode should be
>  > coerced using the default ASCII encoding, and an exception raised if
>  > that isn't possible.
> 
>   I agree.
>   Marc-Andre, what do you think?

Sounds ok to me.

The only question is where to apply the patch:
1. in hasattr()
2. in PyObject_GetAttr()

I'd opt for using the second solution (it should allow string
and Unicode objects as attribute name). hasattr() would then
have to be changed to use the "O" parser marker.

What do you think ?
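The intended behaviour can be sketched in Python (a hypothetical helper for illustration; the real fix lives in C, in hasattr()/PyObject_GetAttr()):

```python
import sys

def getattr_ascii(obj, name):
    # Hypothetical helper mirroring the proposed rule: a unicode
    # attribute name is coerced with the default ASCII encoding,
    # and a non-ASCII name raises instead of silently failing.
    name.encode("ascii")  # UnicodeEncodeError if not pure ASCII
    return getattr(obj, name)

print(getattr_ascii(sys, u"platform") == sys.platform)  # True
```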

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From mal@lemburg.com  Fri Sep  8 13:09:03 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 08 Sep 2000 14:09:03 +0200
Subject: [Python-Dev] hasattr() and Unicode strings
References: <E13X7rk-0005N9-00@kronos.cnri.reston.va.us> <200009072132.QAA10047@cj20424-a.reston1.va.home.com>
Message-ID: <39B8D6DF.AA11746D@lemburg.com>

Guido van Rossum wrote:
> 
> > hasattr(), getattr(), and doubtless other built-in functions
> > don't accept Unicode strings at all:
> >
> > >>> import sys
> > >>> hasattr(sys, u'abc')
> > Traceback (most recent call last):
> >   File "<stdin>", line 1, in ?
> > TypeError: hasattr, argument 2: expected string, unicode found
> >
> > Is this a bug or a feature?  I'd say bug; the Unicode should be
> > coerced using the default ASCII encoding, and an exception raised if
> > that isn't possible.
> 
> Agreed.
> 
> There are probably a bunch of things that need to be changed before
> this works though; getattr() c.s. require a string, then call
> PyObject_GetAttr() which also checks for a string unless the object
> supports tp_getattro -- but that's only true for classes and
> instances.
> 
> Also, should we convert the string to 8-bit, or should we allow
> Unicode attribute names?

Attribute names will have to be 8-bit strings (at least in 2.0).

The reason here is that attributes are normally Python identifiers
which are plain ASCII and stored as 8-bit strings in the namespace
dictionaries, i.e. there's no way to add Unicode attribute names
other than by assigning directly to __dict__.

Note that keyword lookups already automatically convert Unicode
lookup strings to 8-bit using the default encoding. The same should
happen here, IMHO.
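A minimal sketch of that keyword precedent (illustrative only): a plain-ASCII unicode keyword name ends up as an ordinary string key.

```python
def f(**kw):
    return kw

# The unicode name u"spam" coerces to the ordinary key 'spam'.
print(f(**{u"spam": 1}))  # {'spam': 1}
```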
 
> It seems there's no easy fix -- better address this after 2.0 is
> released.

Why wait for 2.1 ?

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From Vladimir.Marangozov@inrialpes.fr  Fri Sep  8 13:24:49 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Fri, 8 Sep 2000 14:24:49 +0200 (CEST)
Subject: [Python-Dev] Re: DEBUG_SAVEALL feature for gc not in 2.0b1?
In-Reply-To: <14769.15402.630192.4454@beluga.mojam.com> from "Skip Montanaro" at Sep 02, 2000 12:43:06 PM
Message-ID: <200009081224.OAA08999@python.inrialpes.fr>

Skip Montanaro wrote:
> 
>     Vlad> Skip Montanaro wrote:
>     >> 
>     >> If I read my (patched) version of gcmodule.c correctly, with the
>     >> gc.DEBUG_SAVEALL bit set, gc.garbage *does* acquire all garbage, not
>     >> just the stuff with __del__ methods.
> 
>     Vlad> Yes. And you don't know which objects are collectable and which
>     Vlad> ones are not by this collector. That is, SAVEALL transforms the
>     Vlad> collector in a cycle detector. 
> 
> Which is precisely what I want.

All right! Since I haven't seen any votes, here's a +1. I'm willing
to handle Neil's patch at SF and let it in after some minor cleanup
that we'll discuss on the patch manager.

Any objections or other opinions on this?

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From gward@mems-exchange.org  Fri Sep  8 13:59:30 2000
From: gward@mems-exchange.org (Greg Ward)
Date: Fri, 8 Sep 2000 08:59:30 -0400
Subject: [Python-Dev] Setup script for Tools/compiler (etc.)
Message-ID: <20000908085930.A15918@ludwig.cnri.reston.va.us>

Jeremy --

it seems to me that there ought to be a setup script in Tools/compiler;
it may not be part of the standard library, but at least it ought to
support the standard installation scheme.

So here it is:

  #!/usr/bin/env python

  from distutils.core import setup

  setup(name = "compiler",
        version = "?",
        author = "Jeremy Hylton",
        author_email = "jeremy@beopen.com",
        packages = ["compiler"])

Do you want to check it in or shall I?  ;-)

Also -- and this is the reason I cc'd python-dev -- there are probably
other useful hacks in Tools that should have setup scripts.  I'm
thinking most prominently of IDLE; as near as I can tell, the only way
to install IDLE is to manually copy Tools/idle/*.py to
<prefix>/lib/python{1.6,2.0}/site-packages/idle and then write a little
shell script to launch it for you, eg:

  #!/bin/sh
  # GPW 2000/07/10 ("strongly inspired" by Red Hat's IDLE script ;-)
  exec /depot/plat/packages/python-2.0b1/bin/python \
    /depot/plat/packages/python-2.0b1/lib/python2.0/site-packages/idle/idle.py $*

This is, of course, completely BOGUS!  Users should not have to write
shell scripts just to install and run IDLE in a sensible way.  I would
be happy to write a setup script that makes it easy to install
Tools/idle as a "third-party" module distribution, complete with a
launch script, if there's interest.  Oh hell, maybe I'll do it
anyways... just howl if you don't think I should check it in.

        Greg
-- 
Greg Ward - software developer                gward@mems-exchange.org
MEMS Exchange / CNRI                           voice: +1-703-262-5376
Reston, Virginia, USA                            fax: +1-703-262-5367


From Vladimir.Marangozov@inrialpes.fr  Fri Sep  8 14:47:08 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Fri, 8 Sep 2000 15:47:08 +0200 (CEST)
Subject: [Python-Dev] Challenge about print >> None
Message-ID: <200009081347.PAA13686@python.inrialpes.fr>

Seems like people are very surprised to see "print >> None" defaulting
to "print >> sys.stderr". I must confess that now that I'm looking at
it and after reading the PEP, this change lacks some argumentation.

In Python, this form surely looks & feels like the Unix cat /dev/null,
that is, since None doesn't have a 'write' method, the print statement
is expected to either raise an exception or be specialized for None to mean
"the print statement has no effect". The deliberate choice of sys.stderr
is not obvious.

I understand that Guido wanted to say "print >> None, args == print args"
and simplify the script logic, but using None in this case seems like a
bad spelling <wink>.

I have certainly carefully avoided any debates on the issue as I don't
see myself using this feature any time soon, but when I see on c.l.py
reactions of surprise on weakly argumented/documented features and I
kind of feel the same way, I'd better ask for more arguments here myself.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From gward@mems-exchange.org  Fri Sep  8 15:14:26 2000
From: gward@mems-exchange.org (Greg Ward)
Date: Fri, 8 Sep 2000 10:14:26 -0400
Subject: [Python-Dev] Distutil-ized IDLE
In-Reply-To: <20000908085930.A15918@ludwig.cnri.reston.va.us>; from gward@mems-exchange.org on Fri, Sep 08, 2000 at 08:59:30AM -0400
References: <20000908085930.A15918@ludwig.cnri.reston.va.us>
Message-ID: <20000908101426.A16014@ludwig.cnri.reston.va.us>

On 08 September 2000, I said:
> I would be happy to write a setup script that makes it easy to install
> Tools/idle as a "third-party" module distribution, complete with a
> launch script, if there's interest.  Oh hell, maybe I'll do it
> anyways... just howl if you don't think I should check it in.

OK, as threatened, I've written a setup script for IDLE.  (Specifically,
the version in Tools/idle in the Python 1.6 and 2.0 source
distributions.)  This installs IDLE into a package "idle", which means
that the imports in idle.py have to change.  Rather than change idle.py,
I wrote a new script just called "idle"; this would replace idle.py and
be installed in <prefix>/bin (on Unix -- I think scripts installed by
the Distutils go to <prefix>/Scripts on Windows, which was a largely
arbitrary choice).

Anyways, here's the setup script:

  #!/usr/bin/env python

  import os
  from distutils.core import setup
  from distutils.command.install_data import install_data

  class IDLE_install_data (install_data):
      def finalize_options (self):
          if self.install_dir is None:
              install_lib = self.get_finalized_command('install_lib')
              self.install_dir = os.path.join(install_lib.install_dir, "idle")

  setup(name = "IDLE",
        version = "0.6",
        author = "Guido van Rossum",
        author_email = "guido@python.org",
        cmdclass = {'install_data': IDLE_install_data},
        packages = ['idle'],
        package_dir = {'idle': ''},
        scripts = ['idle'],
        data_files = ['config.txt', 'config-unix.txt', 'config-win.txt'])

And the changes I suggest to make IDLE smoothly installable:
  * remove idle.py 
  * add this setup.py and idle (which is just idle.py with the imports
    changed)
  * add some instructions on how to install and run IDLE somewhere

I just checked the CVS repository for the IDLE fork, and don't see a
setup.py there either -- so presumably the forked IDLE could benefit
from this as well (hence the cc: idle-dev@python.org).

        Greg
-- 
Greg Ward - software developer                gward@mems-exchange.org
MEMS Exchange / CNRI                           voice: +1-703-262-5376
Reston, Virginia, USA                            fax: +1-703-262-5367


From mal@lemburg.com  Fri Sep  8 15:30:37 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 08 Sep 2000 16:30:37 +0200
Subject: [Python-Dev] Challenge about print >> None
References: <200009081347.PAA13686@python.inrialpes.fr>
Message-ID: <39B8F80D.FF9CBAA9@lemburg.com>

Vladimir Marangozov wrote:
> 
> Seems like people are very surprised to see "print >> None" defaulting
> to "print >> sys.stderr". I must confess that now that I'm looking at
> it and after reading the PEP, this change lacks some argumentation.

According to the PEP it defaults to sys.stdout with the effect of
working just like the plain old "print" statement.

> In Python, this form surely looks & feels like the Unix cat /dev/null,
> that is, since None doesn't have a 'write' method, the print statement
> is expected to either raise an exception or be specialized for None to mean
> "the print statement has no effect". The deliberate choice of sys.stderr
> is not obvious.
> 
> I understand that Guido wanted to say "print >> None, args == print args"
> and simplify the script logic, but using None in this case seems like a
> bad spelling <wink>.
> 
> I have certainly carefully avoided any debates on the issue as I don't
> see myself using this feature any time soon, but when I see on c.l.py
> reactions of surprise on weakly argumented/documented features and I
> kind of feel the same way, I'd better ask for more arguments here myself.

+1

I'd opt for raising an exception instead of magically using
sys.stdout just to avoid two lines of explicit defaulting to
sys.stdout (see the example in the PEP).

BTW, I noted that the PEP pages on SF are not up-to-date. The
PEP 214 doesn't have the comments which Guido added in support
of the proposal.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From fdrake@beopen.com  Fri Sep  8 15:49:59 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Fri, 8 Sep 2000 10:49:59 -0400 (EDT)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <39B8F80D.FF9CBAA9@lemburg.com>
References: <200009081347.PAA13686@python.inrialpes.fr>
 <39B8F80D.FF9CBAA9@lemburg.com>
Message-ID: <14776.64663.617863.830703@cj42289-a.reston1.va.home.com>

M.-A. Lemburg writes:
 > BTW, I noted that the PEP pages on SF are not up-to-date. The
 > PEP 214 doesn't have the comments which Guido added in support
 > of the proposal.

  I just pushed new copies up to SF using the CVS versions.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From bwarsaw@beopen.com  Fri Sep  8 16:00:46 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Fri, 8 Sep 2000 11:00:46 -0400 (EDT)
Subject: [Python-Dev] Finding landmark when prefix != exec-prefix
References: <20000907224007.A959@beelzebub>
Message-ID: <14776.65310.93934.482038@anthem.concentric.net>

Greg,

The place to look for the search algorithm is in Modules/getpath.c.
There's an extensive comment at the top of the file outlining the
algorithm.

In fact $PREFIX and $EXEC_PREFIX are used, but only as fallbacks.

-Barry


From skip@mojam.com (Skip Montanaro)  Fri Sep  8 16:00:38 2000
From: skip@mojam.com (Skip Montanaro) (Skip Montanaro)
Date: Fri, 8 Sep 2000 10:00:38 -0500 (CDT)
Subject: [Python-Dev] Re: [Bug #113811] Python 2.0 beta 1 -- urllib.urlopen() fails
In-Reply-To: <003601c0194e$916012f0$74eb0b18@C322162A>
References: <14776.4972.263490.780783@beluga.mojam.com>
 <003601c0194e$916012f0$74eb0b18@C322162A>
Message-ID: <14776.65302.599381.987636@beluga.mojam.com>

    Bob> The one I used was http://dreamcast.ign.com/review_lists/a.html,
    Bob> but probably any would do since it's pretty ordinary, and the error
    Bob> occurs before making any contact with the destination.

    Bob> By the way, I forgot to mention that I'm running under Windows 2000.

Bob,

Thanks for the input.  I asked for a URL because I thought it unlikely
something common would trigger a bug.  After all, urllib.urlopen is probably
one of the most frequently used Internet-related calls in Python.

I can't reproduce this on my Linux system:

    % ./python
    Python 2.0b1 (#6, Sep  7 2000, 21:03:08) 
    [GCC 2.95.3 19991030 (prerelease)] on linux2
    Type "copyright", "credits" or "license" for more information.
    >>> import urllib
    >>> f = urllib.urlopen("http://dreamcast.ign.com/review_lists/a.html")
    >>> data = f.read()
    >>> len(data)

Perhaps one of the folks on python-dev that run Windows of some flavor can
reproduce the problem.  Can you give me a simple session transcript like the
above that fails for you?  I will see about adding a test to the urllib
regression test.

-- 
Skip Montanaro (skip@mojam.com)
http://www.mojam.com/
http://www.musi-cal.com/


From bwarsaw@beopen.com  Fri Sep  8 16:27:24 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Fri, 8 Sep 2000 11:27:24 -0400 (EDT)
Subject: [Python-Dev] Challenge about print >> None
References: <200009081347.PAA13686@python.inrialpes.fr>
Message-ID: <14777.1372.641371.803126@anthem.concentric.net>

>>>>> "VM" == Vladimir Marangozov <Vladimir.Marangozov@inrialpes.fr> writes:

    VM> Seems like people are very surprised to see "print >> None"
    VM> defaulting to "print >> sys.stderr". I must confess that now
    VM> that I'm looking at it and after reading the PEP, this change
    VM> lacks some argumentation.

sys.stdout, not stderr.

I was pretty solidly -0 on this extension, but Guido wanted it (and
even supplied the necessary patch!).  It tastes too magical to me,
for exactly the same reasons you describe.

I hadn't thought of the None == /dev/null equivalence, but that's a
better idea, IMO.  In fact, perhaps the printing could be optimized
away when None is used (although you'd lose any side-effects there
might be).  This would actually make extended print more useful
because if you used

    print >> logfile

everywhere, you'd only need to start passing in logfile=None to
disable printing.  OTOH, it's not too hard to use

    class Devnull:
        def write(self, msg): pass

    logfile = Devnull()
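The Devnull idiom can be exercised like this (names are just for illustration); it also shows Vladimir's point that None itself has no 'write' method:

```python
class Devnull:
    def write(self, msg):
        pass  # discard everything silently

log = Devnull()
log.write("this line is discarded\n")  # fine: a no-op

try:
    None.write("boom")  # None is not file-like
except AttributeError:
    print("None has no write method")
```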

We'll have to wait until after the weekend for Guido's pronouncement.

-Barry




From Vladimir.Marangozov@inrialpes.fr  Fri Sep  8 17:23:13 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Fri, 8 Sep 2000 18:23:13 +0200 (CEST)
Subject: [Python-Dev] 2.0 Optimization & speed
Message-ID: <200009081623.SAA14090@python.inrialpes.fr>

Continuing my impressions on the user's feedback to date: Donn Cave
& MAL are at least two voices I've heard about an overall slowdown
of the 2.0b1 release compared to 1.5.2. Frankly, I have no idea where
this slowdown comes from and I believe that we have only vague guesses
about the possible causes: unicode database, more opcodes in ceval, etc.

I wonder whether we are in a position to try improving Python's
performance with some `wise quickies' in a next beta. But this raises
a more fundamental question on what is our margin for manoeuvres at this
point. This in turn implies that we need some classification of the
proposed optimizations to date.

Perhaps it would be good to create a dedicated Web page for this, but
in the meantime, let's try to build a list/table of the ideas that have
been proposed so far. This would be useful anyway, and the list would be
filled as time goes.

Trying to push this initiative one step further, here's a very rough start
on the top of my head:

Category 1: Algorithmic Changes

These are the most promising, since they don't relate to pure technicalities
but imply potential improvements with some evidence.
I'd put in this category:

- the dynamic dictionary/string specialization by Fred Drake
  (this is already in). Can this be applied in other areas? If so, where?

- the Python-specific mallocs. Actually, I'm pretty sure that a lot of
  `overhead' is due to the standard mallocs which happen to be expensive
  for Python in both space and time. Python is very malloc-intensive.
  The only reason I've postponed my obmalloc patch is that I still haven't
  provided an interface for evaluating its impact on memory
  consumption. It gives a noticeable speedup on all machines, so
  it counts as a good candidate w.r.t. performance.

- ??? (maybe some parts of MAL's optimizations could go here)

Category 2: Technical / Code optimizations

This category includes all (more or less) controversial proposals, like

- my latest lookdict optimizations (a typical controversial `quickie')

- opcode folding & reordering. Actually, I'm unclear on why Guido
  postponed the reordering idea; it has received positive feedback
  and all theoretical reasoning and practical experiments showed that
  this "could" help, although without any guarantees. Nobody reported
  slowdowns, though. This is typically a change without real dangers.

- kill the async / pending calls logic. (Tim, what happened with this
  proposal?)

- compact the unicodedata database, which is expected to reduce the
  mem footprint, maybe improve startup time, etc. (ongoing)

- proposal about optimizing the "file hits" on startup.

- others?

If there are potential `wise quickies', maybe it's good to refresh
them now and experiment a bit more before the final release?

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From mwh21@cam.ac.uk  Fri Sep  8 17:39:58 2000
From: mwh21@cam.ac.uk (Michael Hudson)
Date: Fri, 8 Sep 2000 17:39:58 +0100 (BST)
Subject: [Python-Dev] 2.0 Optimization & speed
In-Reply-To: <200009081623.SAA14090@python.inrialpes.fr>
Message-ID: <Pine.LNX.4.10.10009081736070.29215-100000@localhost.localdomain>

It's 5:30 and I'm still at work (eek!) so for now I'll just say:

On Fri, 8 Sep 2000, Vladimir Marangozov wrote:
[...]
> Category 2: Technical / Code optimizations
[...]
> - others?

Killing off SET_LINENO?

Cheers,
M.




From mal@lemburg.com  Fri Sep  8 17:49:58 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 08 Sep 2000 18:49:58 +0200
Subject: [Python-Dev] 2.0 Optimization & speed
References: <200009081623.SAA14090@python.inrialpes.fr>
Message-ID: <39B918B6.659C6C88@lemburg.com>

Vladimir Marangozov wrote:
> 
> Continuing my impressions on the user's feedback to date: Donn Cave
> & MAL are at least two voices I've heard about an overall slowdown
> of the 2.0b1 release compared to 1.5.2. Frankly, I have no idea where
> this slowdown comes from and I believe that we have only vague guesses
> about the possible causes: unicode database, more opcodes in ceval, etc.
> 
> I wonder whether we are in a position to try improving Python's
> performance with some `wise quickies' in a next beta.

I don't think it's worth trying to optimize anything in the
beta series: optimizations need to be well tested and therefore
should go into 2.1.

Perhaps we ought to make these optimizations the big new issue
for 2.1...

It would fit well with the move to a more pluggable interpreter
design.

> But this raises
> a more fundamental question on what is our margin for manoeuvres at this
> point. This in turn implies that we need some classification of the
> proposed optimizations to date.
> 
> Perhaps it would be good to create a dedicated Web page for this, but
> in the meantime, let's try to build a list/table of the ideas that have
> been proposed so far. This would be useful anyway, and the list would be
> filled as time goes.
> 
> Trying to push this initiative one step further, here's a very rough start
> on the top of my head:
> 
> Category 1: Algorithmic Changes
> 
> These are the most promising, since they don't relate to pure technicalities
> but imply potential improvements with some evidence.
> I'd put in this category:
> 
> - the dynamic dictionary/string specialization by Fred Drake
>   (this is already in). Can this be applied in other areas? If so, where?
>
> - the Python-specific mallocs. Actually, I'm pretty sure that a lot of
>   `overhead' is due to the standard mallocs which happen to be expensive
>   for Python in both space and time. Python is very malloc-intensive.
>   The only reason I've postponed my obmalloc patch is that I still haven't
>   provided an interface for evaluating its impact on memory
>   consumption. It gives a noticeable speedup on all machines, so
>   it counts as a good candidate w.r.t. performance.
> 
> - ??? (maybe some parts of MAL's optimizations could go here)

One addition would be my small dict patch: the dictionary
tables for small dictionaries are added to the dictionary
object itself rather than allocating a separate buffer.
This is useful for small dictionaries (8-16 entries) and
causes a speedup due to the fact that most instance dictionaries
are in fact of that size.
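A quick way to see why this targets the common case (illustrative only):

```python
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)
# Most instance dicts hold just a few attributes, well inside the
# 8-16 entry range the patch would inline into the dict object.
print(len(vars(p)))  # 2
```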
 
> Category 2: Technical / Code optimizations
> 
> This category includes all (more or less) controversial proposals, like
> 
> - my latest lookdict optimizations (a typical controversial `quickie')
> 
> - opcode folding & reordering. Actually, I'm unclear on why Guido
>   postponed the reordering idea; it has received positive feedback
>   and all theoretical reasoning and practical experiments showed that
>   this "could" help, although without any guarantees. Nobody reported
>   slowdowns, though. This is typically a change without real dangers.

Rather than folding opcodes, I'd suggest breaking the huge
switch in two or three parts so that the most commonly used
opcodes fit nicely into the CPU cache.
 
> - kill the async / pending calls logic. (Tim, what happened with this
>   proposal?)

In my patched version of 1.5 I have moved this logic into the
second part of the ceval switch: as a result, signals are only
queried if a less common opcode is used.

> - compact the unicodedata database, which is expected to reduce the
>   mem footprint, maybe improve startup time, etc. (ongoing)

This was postponed to 2.1. It doesn't have any impact on
performance... not even on memory footprint since it is only
loaded on demand by the OS.
 
> - proposal about optimizing the "file hits" on startup.

A major startup speedup can be had by using a smarter
file lookup mechanism. 

Another possibility is freeze()ing the whole standard lib 
and putting it into a shared module. I'm not sure how well
this works with packages, but it did work very well for
1.5.2 (see the mxCGIPython project).
 
> - others?
> 
> If there are potential `wise quickies', maybe it's good to refresh
> them now and experiment a bit more before the final release?

No, let's leave this for 2.1.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From cgw@fnal.gov  Fri Sep  8 18:18:01 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Fri, 8 Sep 2000 12:18:01 -0500 (CDT)
Subject: [Python-Dev] obsolete urlopen.py in CVS
Message-ID: <14777.8009.543626.966203@buffalo.fnal.gov>

Another obsolete file has magically appeared in my local CVS
workspace.  I am assuming that I should continue to report these sorts
of problems. If not, just tell me and I'll stop with these annoying
messages.  Is there a mail address for the CVS admin so I don't have
to bug the whole list?

Lib$ cvs status urlopen.py                                             
===================================================================
File: urlopen.py        Status: Up-to-date

   Working revision:    1.7
   Repository revision: 1.7     /cvsroot/python/python/dist/src/Lib/Attic/urlopen.py,v
   Sticky Tag:          (none)
   Sticky Date:         (none)
   Sticky Options:      (none)



From effbot@telia.com  Fri Sep  8 18:38:07 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Fri, 8 Sep 2000 19:38:07 +0200
Subject: [Python-Dev] 2.0 Optimization & speed
References: <200009081623.SAA14090@python.inrialpes.fr> <39B918B6.659C6C88@lemburg.com>
Message-ID: <00e401c019bb$904084a0$766940d5@hagrid>

mal wrote:
> > - compact the unicodedata database, which is expected to reduce the
> >   mem footprint, maybe improve startup time, etc. (ongoing)
> 
> This was postponed to 2.1. It doesn't have any impact on
> performance...

sure has, for anyone distributing python applications.  we're
talking more than 1 meg of extra binary bloat (over 2.5 megs
of extra source code...)

the 2.0 release PEP says:

    Compression of Unicode database - Fredrik Lundh
      SF Patch 100899
      At least for 2.0b1.  May be included in 2.0 as a bug fix.

(the API is frozen, and we have an extensive test suite...)

</F>



From fdrake@beopen.com  Fri Sep  8 18:29:54 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Fri, 8 Sep 2000 13:29:54 -0400 (EDT)
Subject: [Python-Dev] 2.0 Optimization & speed
In-Reply-To: <00e401c019bb$904084a0$766940d5@hagrid>
References: <200009081623.SAA14090@python.inrialpes.fr>
 <39B918B6.659C6C88@lemburg.com>
 <00e401c019bb$904084a0$766940d5@hagrid>
Message-ID: <14777.8722.902222.452584@cj42289-a.reston1.va.home.com>

Fredrik Lundh writes:
 > (the API is frozen, and we have an extensive test suite...)

  What are the reasons for the hold-up?


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From effbot@telia.com  Fri Sep  8 18:41:59 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Fri, 8 Sep 2000 19:41:59 +0200
Subject: [Python-Dev] obsolete urlopen.py in CVS
References: <14777.8009.543626.966203@buffalo.fnal.gov>
Message-ID: <00ea01c019bc$1929f4e0$766940d5@hagrid>

Charles G Waldman wrote:
> Another obsolete file has magically appeared in my local CVS
> workspace.  I am assuming that I should continue to report these sorts
> of problems. If not, just tell me and I'll stop with these annoying
> messages.

what exactly are you doing to check things out?

note that CVS may check things out from the Attic under
certain circumstances, like if you do "cvs update -D".  see
the CVS FAQ for more info.

</F>



From mal@lemburg.com  Fri Sep  8 18:43:40 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 08 Sep 2000 19:43:40 +0200
Subject: [Python-Dev] 2.0 Optimization & speed
References: <200009081623.SAA14090@python.inrialpes.fr> <39B918B6.659C6C88@lemburg.com> <00e401c019bb$904084a0$766940d5@hagrid>
Message-ID: <39B9254C.5209AC81@lemburg.com>

Fredrik Lundh wrote:
> 
> mal wrote:
> > > - compact the unicodedata database, which is expected to reduce the
> > >   mem footprint, maybe improve startup time, etc. (ongoing)
> >
> > This was postponed to 2.1. It doesn't have any impact on
> > performance...
> 
> sure has, for anyone distributing python applications.  we're
> talking more than 1 meg of extra binary bloat (over 2.5 megs
> of extra source code...)

Yes, but there's no impact on speed, and that's what Vladimir
was referring to.
 
> the 2.0 release PEP says:
> 
>     Compression of Unicode database - Fredrik Lundh
>       SF Patch 100899
>       At least for 2.0b1.  May be included in 2.0 as a bug fix.
> 
> (the API is frozen, and we have an extensive test suite...)

Note that I want to redesign the Unicode database and ctype
access for 2.1: all databases should be accessible through
the unicodedatabase module which will be rewritten as Python
module. 

The real data will then go into auxiliary C modules
as static C data which are managed by the Python module
and loaded on demand. This means that what now is unicodedatabase
will then move into some _unicodedb module.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From cgw@fnal.gov  Fri Sep  8 19:13:48 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Fri, 8 Sep 2000 13:13:48 -0500 (CDT)
Subject: [Python-Dev] obsolete urlopen.py in CVS
In-Reply-To: <00ea01c019bc$1929f4e0$766940d5@hagrid>
References: <14777.8009.543626.966203@buffalo.fnal.gov>
 <00ea01c019bc$1929f4e0$766940d5@hagrid>
Message-ID: <14777.11356.106477.440474@buffalo.fnal.gov>

Fredrik Lundh writes:

 > what exactly are you doing to check things out?

cvs update -dAP

 > note that CVS may check things out from the Attic under
 > certain circumstances, like if you do "cvs update -D".  see
 > the CVS FAQ for more info.

No, I am not using the '-D' flag.





From Vladimir.Marangozov@inrialpes.fr  Fri Sep  8 20:27:06 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Fri, 8 Sep 2000 21:27:06 +0200 (CEST)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <14777.1372.641371.803126@anthem.concentric.net> from "Barry A. Warsaw" at Sep 08, 2000 11:27:24 AM
Message-ID: <200009081927.VAA14502@python.inrialpes.fr>

Barry A. Warsaw wrote:
> 
> 
> >>>>> "VM" == Vladimir Marangozov <Vladimir.Marangozov@inrialpes.fr> writes:
> 
>     VM> Seems like people are very surprised to see "print >> None"
>     VM> defaulting to "print >> sys.stderr". I must confess that now
>     VM> that I'm looking at it and after reading the PEP, this change
>     VM> lacks some argumentation.
> 
> sys.stdout, not stderr.

typo

> 
> I was pretty solidly -0 on this extension, but Guido wanted it (and
> even supplied the necessary patch!).  It tastes too magical to me,
> for exactly the same reasons you describe.
> 
> I hadn't thought of the None == /dev/null equivalence, but that's a
> better idea, IMO.  In fact, perhaps the printing could be optimized
> away when None is used (although you'd lose any side-effects there
> might be).  This would actually make extended print more useful
> because if you used
> 
>     print >> logfile
> 
> everywhere, you'd only need to start passing in logfile=None to
> disable printing.  OTOH, it's not too hard to use
> 
>     class Devnull:
>         def write(self, msg): pass
> 
>     logfile = Devnull()

In no way different from using a function, say output(), or an instance
of a Stream class that can poke at will on file objects, instead of
extended print <0.5 wink>. This is a matter of personal taste, after all.
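The null-sink class quoted above still works as-is; a minimal runnable sketch, using Python 3's `print(..., file=...)` spelling of extended print:

```python
class Devnull:
    def write(self, msg):
        pass  # silently discard everything written

logfile = Devnull()
print("this line vanishes", file=logfile)  # nothing is emitted
print("logging disabled without touching the call sites")
```

Swapping `logfile` between a real file and a `Devnull()` instance toggles output without editing any print statements.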

> 
> We'll have to wait until after the weekend for Guido's pronouncement.
> 

Sure. Note that I don't feel like I'll lose any sleep if this doesn't
change. However, it looks like the None business goes a bit too far here.
In the past, Guido used to label such things "creeping featurism", but
times change... :-)

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From bwarsaw@beopen.com  Fri Sep  8 20:36:01 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Fri, 8 Sep 2000 15:36:01 -0400 (EDT)
Subject: [Python-Dev] Challenge about print >> None
References: <14777.1372.641371.803126@anthem.concentric.net>
 <200009081927.VAA14502@python.inrialpes.fr>
Message-ID: <14777.16289.587240.778501@anthem.concentric.net>

>>>>> "VM" == Vladimir Marangozov <Vladimir.Marangozov@inrialpes.fr> writes:

    VM> Sure. Note that I don't feel like I'll lose any sleep if this
    VM> doesn't change. However, it looks like the None business goes
    VM> a bit too far here.  In the past, Guido used to label such
    VM> things "creeping featurism", but times change... :-)

Agreed.


From mal@lemburg.com  Fri Sep  8 21:26:45 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 08 Sep 2000 22:26:45 +0200
Subject: [Python-Dev] Letting ../site-packages override the standard lib ?!
References: <200009081702.LAA08275@localhost.localdomain>
 <Pine.LNX.4.21.0009081236020.16116-100000@amati.techno.com> <14777.18321.457342.757978@cj42289-a.reston1.va.home.com>
Message-ID: <39B94B85.BFD16019@lemburg.com>

As you may have heard, there are problems with the stock
XML support and the PyXML project due to both trying to
use the xml package namespace (see the xml-sig for details).

To provide more flexibility to the third-party tools in such
a situation, I think it would be worthwhile moving the
site-packages/ entry in sys.path in front of the lib/python2.0/
entry.

That way a third party tool can override the standard lib's
package or module or take appropriate action to reintegrate
the standard lib's package namespace into an extended one.

What do you think ?
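The proposed reordering amounts to a small shuffle of sys.path; the paths below are illustrative examples, not taken from any real installation:

```python
# A hypothetical sys.path, with the stdlib entry ahead of site-packages
# as in the default 2.0 layout.
path = [
    "/usr/local/lib/python2.0",
    "/usr/local/lib/python2.0/site-packages",
]

# The proposal: move site-packages entries in front of the stdlib entry,
# so a package installed there shadows its stdlib namesake.
site = [p for p in path if p.endswith("site-packages")]
rest = [p for p in path if not p.endswith("site-packages")]
reordered = site + rest
print(reordered[0])  # the site-packages entry now comes first
```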

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From Vladimir.Marangozov@inrialpes.fr  Fri Sep  8 21:48:23 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Fri, 8 Sep 2000 22:48:23 +0200 (CEST)
Subject: [Python-Dev] 2.0 Optimization & speed
In-Reply-To: <39B9254C.5209AC81@lemburg.com> from "M.-A. Lemburg" at Sep 08, 2000 07:43:40 PM
Message-ID: <200009082048.WAA14671@python.inrialpes.fr>

M.-A. Lemburg wrote:
> 
> Fredrik Lundh wrote:
> > 
> > mal wrote:
> > > > - compact the unicodedata database, which is expected to reduce the
> > > >   mem footprint, maybe improve startup time, etc. (ongoing)
> > >
> > > This was postponed to 2.1. It doesn't have any impact on
> > > performance...
> > 
> > sure has, for anyone distributing python applications.  we're
> > talking more than 1 meg of extra binary bloat (over 2.5 megs
> > of extra source code...)
> 
> Yes, but there's no impact on speed and that's what Valdimir
> was referring to.

Hey Marc-Andre, what encoding are you using for printing my name? <wink>

>  
> > the 2.0 release PEP says:
> > 
> >     Compression of Unicode database - Fredrik Lundh
> >       SF Patch 100899
> >       At least for 2.0b1.  May be included in 2.0 as a bug fix.
> > 
> > (the API is frozen, and we have an extensive test suite...)
> 
> Note that I want to redesign the Unicode database and ctype
> access for 2.1: all databases should be accessible through
> the unicodedatabase module, which will be rewritten as a Python
> module.
> 
> The real data will then go into auxiliary C modules
> as static C data which are managed by the Python module
> and loaded on demand. This means that what is now unicodedatabase
> will then move into some _unicodedb module.

Hey Marc-Andre, don't try to reduce /F's crunching efforts to dust.
My argument doesn't hold, but Fredrik has a point and I don't see how
your future changes would invalidate these efforts. If the size of
the distribution can be reduced, it should be reduced! Did you know
that telecom companies measure the quality of their technologies on
a per bit basis? <0.1 wink> Every bit costs money, and that's why
Van Jacobson packet-header compression has been invented and is
massively used. Whole armies of researchers are currently trying to
compensate for the irresponsible bloatware that people of the higher
layers are imposing on them <wink>. Careful!

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From jeremy@beopen.com  Fri Sep  8 21:54:33 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Fri, 8 Sep 2000 16:54:33 -0400 (EDT)
Subject: [Python-Dev] Letting ../site-packages override the standard lib ?!
In-Reply-To: <39B94B85.BFD16019@lemburg.com>
References: <200009081702.LAA08275@localhost.localdomain>
 <Pine.LNX.4.21.0009081236020.16116-100000@amati.techno.com>
 <14777.18321.457342.757978@cj42289-a.reston1.va.home.com>
 <39B94B85.BFD16019@lemburg.com>
Message-ID: <14777.21001.363279.137646@bitdiddle.concentric.net>

>>>>> "MAL" == M -A Lemburg <mal@lemburg.com> writes:

  MAL> To provide more flexibility to the third-party tools in such a
  MAL> situation, I think it would be worthwhile moving the
  MAL> site-packages/ entry in sys.path in front of the lib/python2.0/
  MAL> entry.

  MAL> That way a third party tool can override the standard lib's
  MAL> package or module or take appropriate action to reintegrate the
  MAL> standard lib's package namespace into an extended one.

  MAL> What do you think ?

I think it is a bad idea to encourage third party tools to override
the standard library.  We call it the standard library for a reason!

It invites confusion and headaches to read a bit of code that says
"import pickle" and have its meaning depend on what oddball packages
someone has installed on the system.  Good bye, portability!

If you want to use a third-party package that provides the same
interface as a standard library, it seems much cleaner to say so
explicitly.

I would agree that there is an interesting design problem here.  I
think the problem is supporting interfaces, where an interface allows me
to write code that can run with any implementation of that interface.
I don't think hacking sys.path is a good solution.

Jeremy


From akuchlin@mems-exchange.org  Fri Sep  8 21:52:02 2000
From: akuchlin@mems-exchange.org (Andrew Kuchling)
Date: Fri, 8 Sep 2000 16:52:02 -0400
Subject: [Python-Dev] Letting ../site-packages override the standard lib ?!
In-Reply-To: <14777.21001.363279.137646@bitdiddle.concentric.net>; from jeremy@beopen.com on Fri, Sep 08, 2000 at 04:54:33PM -0400
References: <200009081702.LAA08275@localhost.localdomain> <Pine.LNX.4.21.0009081236020.16116-100000@amati.techno.com> <14777.18321.457342.757978@cj42289-a.reston1.va.home.com> <39B94B85.BFD16019@lemburg.com> <14777.21001.363279.137646@bitdiddle.concentric.net>
Message-ID: <20000908165202.F12994@kronos.cnri.reston.va.us>

On Fri, Sep 08, 2000 at 04:54:33PM -0400, Jeremy Hylton wrote:
>It invites confusion and headaches to read a bit of code that says
>"import pickle" and have its meaning depend on what oddball packages
>someone has installed on the system.  Good bye, portability!

Amen.  But then, I was against adding xml/ in the first place...

--amk


From mal@lemburg.com  Fri Sep  8 21:53:32 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 08 Sep 2000 22:53:32 +0200
Subject: [Python-Dev] 2.0 Optimization & speed
References: <200009082048.WAA14671@python.inrialpes.fr>
Message-ID: <39B951CC.3C0AE801@lemburg.com>

Vladimir Marangozov wrote:
> 
> M.-A. Lemburg wrote:
> >
> > Fredrik Lundh wrote:
> > >
> > > mal wrote:
> > > > > - compact the unicodedata database, which is expected to reduce the
> > > > >   mem footprint, maybe improve startup time, etc. (ongoing)
> > > >
> > > > This was postponed to 2.1. It doesn't have any impact on
> > > > performance...
> > >
> > > sure has, for anyone distributing python applications.  we're
> > > talking more than 1 meg of extra binary bloat (over 2.5 megs
> > > of extra source code...)
> >
> > Yes, but there's no impact on speed and that's what Valdimir
> > was referring to.
> 
> Hey Marc-Andre, what encoding are you using for printing my name? <wink>

Yeah, I know... the codec swaps characters on an irregular basis
-- gotta fix that ;-)
 
> >
> > > the 2.0 release PEP says:
> > >
> > >     Compression of Unicode database - Fredrik Lundh
> > >       SF Patch 100899
> > >       At least for 2.0b1.  May be included in 2.0 as a bug fix.
> > >
> > > (the API is frozen, and we have an extensive test suite...)
> >
> > Note that I want to redesign the Unicode database and ctype
> > access for 2.1: all databases should be accessible through
> > the unicodedatabase module, which will be rewritten as a Python
> > module.
> >
> > The real data will then go into auxiliary C modules
> > as static C data which are managed by the Python module
> > and loaded on demand. This means that what is now unicodedatabase
> > will then move into some _unicodedb module.
> 
> Hey Marc-Andre, don't try to reduce /F's crunching efforts to dust.

Oh, I didn't try to reduce Fredrik's efforts at all. To the
contrary: I'm still looking forward to his melted down version
of the database and the ctype tables.

The point I wanted to make was that all this can well be
done for 2.1. There are many more urgent things which need
to get settled in the beta cycle. Size optimizations are
not necessarily one of them, IMHO.

> My argument doesn't hold, but Fredrik has a point and I don't see how
> your future changes would invalidate these efforts. If the size of
> the distribution can be reduced, it should be reduced! Did you know
> that telecom companies measure the quality of their technologies on
> a per bit basis? <0.1 wink> Every bit costs money, and that's why
> Van Jacobson packet-header compression has been invented and is
> massively used. Whole armies of researchers are currently trying to
> compensate for the irresponsible bloatware that people of the higher
> layers are imposing on them <wink>. Careful!

True, but why the hurry ?

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From tim_one@email.msn.com  Fri Sep  8 21:58:31 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Fri, 8 Sep 2000 16:58:31 -0400
Subject: [Python-Dev] Letting ../site-packages override the standard lib ?!
In-Reply-To: <20000908165202.F12994@kronos.cnri.reston.va.us>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEACHFAA.tim_one@email.msn.com>

[Andrew Kuchling]
> Amen.  But then, I was against adding xml/ in the first place...

So *you're* the guy who sabotaged the Windows installer!  Should have
guessed -- you almost got away with it, too <wink>.




From mal@lemburg.com  Fri Sep  8 22:31:06 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 08 Sep 2000 23:31:06 +0200
Subject: [Python-Dev] Letting ../site-packages override the standard lib ?!
References: <200009081702.LAA08275@localhost.localdomain>
 <Pine.LNX.4.21.0009081236020.16116-100000@amati.techno.com>
 <14777.18321.457342.757978@cj42289-a.reston1.va.home.com>
 <39B94B85.BFD16019@lemburg.com> <14777.21001.363279.137646@bitdiddle.concentric.net>
Message-ID: <39B95A9A.D5A01F53@lemburg.com>

Jeremy Hylton wrote:
> 
> >>>>> "MAL" == M -A Lemburg <mal@lemburg.com> writes:
> 
>   MAL> To provide more flexibility to the third-party tools in such a
>   MAL> situation, I think it would be worthwhile moving the
>   MAL> site-packages/ entry in sys.path in front of the lib/python2.0/
>   MAL> entry.
> 
>   MAL> That way a third party tool can override the standard lib's
>   MAL> package or module or take appropriate action to reintegrate the
>   MAL> standard lib's package namespace into an extended one.
> 
>   MAL> What do you think ?
> 
> I think it is a bad idea to encourage third party tools to override
> the standard library.  We call it the standard library for a reason!
> 
> It invites confusion and headaches to read a bit of code that says
> "import pickle" and have its meaning depend on what oddball packages
> someone has installed on the system.  Good bye, portability!

Ok... so we'll need a more flexible solution.
 
> If you want to use a third-party package that provides the same
> interface as a standard library, it seems much cleaner to say so
> explicitly.
> 
> I would agree that there is an interesting design problem here.  I
> think the problem is supporting interfaces, where an interface allows me
> to write code that can run with any implementation of that interface.
> I don't think hacking sys.path is a good solution.

No, the problem is different: there is currently no way to
automatically add subpackages to an existing package which is
not aware of these new subpackages, i.e. say you have a
package xml in the standard lib and somebody wants to install
a new subpackage wml.

The only way to do this is by putting it into the xml
package directory (bad!) or by telling the user to
run 

	import xml_wml

first which then does the

	import xml, wml
	xml.wml = wml

to complete the installation... there has to be a more elegant
way.
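The grafting trick described above can be sketched as a runnable snippet; the names `hostpkg` and `wml` are hypothetical stand-ins, built in memory rather than installed on disk:

```python
import sys
import types

# Stand-ins for an installed package and an add-on subpackage.  Real
# code would import the installed package rather than fabricate it.
pkg = types.ModuleType("hostpkg")
sub = types.ModuleType("hostpkg.wml")
sub.value = 42

# The graft itself -- what 'import xml, wml; xml.wml = wml' boils down
# to: register the submodule and bind it as an attribute of its parent.
sys.modules["hostpkg"] = pkg
sys.modules["hostpkg.wml"] = sub
pkg.wml = sub

import hostpkg
print(hostpkg.wml.value)  # 42
```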

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From mal@lemburg.com  Fri Sep  8 22:48:18 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 08 Sep 2000 23:48:18 +0200
Subject: [Python-Dev] PyObject_SetAttr/GetAttr() and non-string attribute names
Message-ID: <39B95EA2.7D98AA4C@lemburg.com>

While hacking along on a patch to let set|get|hasattr() accept
Unicode attribute names, I found that all current tp_getattro
and tp_setattro implementations (classes, instances, methods) expect
to find string objects as argument and don't even check for this.

Is this documented somewhere ? Should we make the existing
implementations aware of other objects as well ? Should we
fix the de-facto definition to string attribute names ?

My current solution does the latter. It's available as patch
on SF.
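In modern CPython the string-only rule is enforced at the Python level too, which is the direction this patch pointed toward; a quick demonstration (Python-level only -- the patch itself touches the C-level tp_getattro/tp_setattro slots):

```python
# getattr/setattr illustrate the de-facto rule discussed above:
# attribute names must be strings, and non-strings are rejected
# rather than silently misbehaving.
class C:
    pass

obj = C()
setattr(obj, "spam", 1)
print(getattr(obj, "spam"))   # 1

try:
    getattr(obj, 42)          # non-string attribute name
except TypeError as exc:
    print("rejected:", exc)
```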

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From jack@oratrix.nl  Fri Sep  8 23:55:01 2000
From: jack@oratrix.nl (Jack Jansen)
Date: Sat, 09 Sep 2000 00:55:01 +0200
Subject: [Python-Dev] Need some hands to debug MacPython installer
Message-ID: <20000908225506.92145D71FF@oratrix.oratrix.nl>

Folks,
I need some people to test the MacPython 2.0b1 installer. It is almost 
complete; only things like the readme file and some of the
documentation (on building and such) remain to be done. At least: as
far as I know. If someone (or someones) could try
ftp://ftp.cwi.nl/pub/jack/python/mac/PythonMac20preb1Installer.bin 
and tell me whether it works that would be much appreciated.
One thing to note is that if you've been building 2.0b1 MacPythons
from the CVS repository you'll have to remove your preference file
first (no such problem with older prefs files).

All feedback is welcome, of course, but I'm especially interested in
hearing which things I've forgotten (if people could check that
expected new modules and such are indeed there), and which bits of the 
documentation (in Mac:Demo) need massaging. Oh, and bugs of course,
in the unlikely event of there being any :-)
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen@oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | ++++ see http://www.xs4all.nl/~tank/ ++++


From gstein@lyra.org  Sat Sep  9 00:08:55 2000
From: gstein@lyra.org (Greg Stein)
Date: Fri, 8 Sep 2000 16:08:55 -0700
Subject: [Python-Dev] Letting ../site-packages override the standard lib ?!
In-Reply-To: <39B95A9A.D5A01F53@lemburg.com>; from mal@lemburg.com on Fri, Sep 08, 2000 at 11:31:06PM +0200
References: <200009081702.LAA08275@localhost.localdomain> <Pine.LNX.4.21.0009081236020.16116-100000@amati.techno.com> <14777.18321.457342.757978@cj42289-a.reston1.va.home.com> <39B94B85.BFD16019@lemburg.com> <14777.21001.363279.137646@bitdiddle.concentric.net> <39B95A9A.D5A01F53@lemburg.com>
Message-ID: <20000908160855.B16566@lyra.org>

On Fri, Sep 08, 2000 at 11:31:06PM +0200, M.-A. Lemburg wrote:
> Jeremy Hylton wrote:
>...
> > If you want to use a third-party package that provides the same
> > interface as a standard library, it seems much cleaner to say so
> > explicitly.
> > 
> > I would agree that there is an interesting design problem here.  I
> > think the problem is supporting interfaces, where an interface allows me
> > to write code that can run with any implementation of that interface.
> > I don't think hacking sys.path is a good solution.
> 
> No, the problem is different: there is currently no way to
> automatically add subpackages to an existing package which is
> not aware of these new subpackages, i.e. say you have a
> package xml in the standard lib and somebody wants to install
> a new subpackage wml.
> 
> The only way to do this is by putting it into the xml
> package directory (bad!) or by telling the user to
> run 
> 
> 	import xml_wml
> 
> first which then does the
> 
> 	import xml, wml
> 	xml.wml = wml
> 
> to complete the installation... there has to be a more elegant
> way.

There is. I proposed it a while back. Fred chose to use a different
mechanism, despite my recommendations to the contrary. *shrug*

The "current" mechanism require the PyXML package to completely override the
entire xml package in the Python distribution. This has certain, um,
problems... :-)

Another approach would be to use the __path__ symbol. I dislike that for
various import design reasons, but it would solve one of the issues Fred had
with my recommendation (e.g. needing to pre-import subpackages).
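For the record, the __path__ mechanism looks roughly like this; the package and directory names are hypothetical, and everything is built in a temp directory:

```python
import os
import sys
import tempfile

# Build a package and a separate add-on directory on disk.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "mypkg"))
os.makedirs(os.path.join(root, "addons"))
with open(os.path.join(root, "mypkg", "__init__.py"), "w") as f:
    f.write("")
with open(os.path.join(root, "addons", "wml.py"), "w") as f:
    f.write("value = 42\n")

sys.path.insert(0, root)
import mypkg

# Extending __path__ makes the add-on directory part of the package, so
# 'mypkg.wml' imports even though wml.py lives outside mypkg/.
mypkg.__path__.append(os.path.join(root, "addons"))
from mypkg import wml
print(wml.value)  # 42
```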

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/


From cgw@fnal.gov  Sat Sep  9 00:41:12 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Fri, 8 Sep 2000 18:41:12 -0500 (CDT)
Subject: [Python-Dev] Need some hands to debug MacPython installer
In-Reply-To: <20000908225506.92145D71FF@oratrix.oratrix.nl>
References: <20000908225506.92145D71FF@oratrix.oratrix.nl>
Message-ID: <14777.31000.382351.905418@buffalo.fnal.gov>

Jack Jansen writes:
 > Folks,
 > I need some people to test the MacPython 2.0b1 installer. 

I am not a Mac user but I saw your posting and my wife has a Mac so I
decided to give it a try. 

When I ran the installer, a lot of the text referred to "Python 1.6"
despite this being a 2.0 installer.

As the install completed I got a message:  

 The application "Configure Python" could not be opened because
 "OTInetClientLib -- OTInetGetSecondaryAddresses" could not be found

After that, if I try to bring up PythonIDE or PythonInterpreter by
clicking on the 16-ton icons, I get the same message about
OTInetGetSecondaryAddresses.  So I'm not able to run Python at all
right now on this Mac.


From sdm7g@virginia.edu  Sat Sep  9 01:23:45 2000
From: sdm7g@virginia.edu (Steven D. Majewski)
Date: Fri, 8 Sep 2000 20:23:45 -0400 (EDT)
Subject: [Python-Dev] Re: [Pythonmac-SIG] Need some hands to debug MacPython installer
In-Reply-To: <20000908225506.92145D71FF@oratrix.oratrix.nl>
Message-ID: <Pine.A32.3.90.1000908201956.15033A-100000@elvis.med.Virginia.EDU>

On Sat, 9 Sep 2000, Jack Jansen wrote:

> All feedback is welcome, of course, but I'm especially interested in
> hearing which things I've forgotten (if people could check that
> expected new modules and such are indeed there), and which bits of the 
> documentation (in Mac:Demo) need massaging. Oh, and bugs of course,
> in the unlikely event of there being any :-)

Install went smoothly. I haven't been following the latest developments,
so I'm not sure if this is SUPPOSED to work yet or not, but: 


Python 2.0b1 (#64, Sep  8 2000, 23:37:06)  [CW PPC w/GUSI2 w/THREADS]
Copyright (c) 2000 BeOpen.com.
All Rights Reserved.

 [...] 

>>> import thread
>>> import threading
Traceback (most recent call last):
  File "<input>", line 1, in ?
  File "Work:Python 2.0preb1:Lib:threading.py", line 538, in ?
    _MainThread()
  File "Work:Python 2.0preb1:Lib:threading.py", line 465, in __init__
    import atexit
ImportError: No module named atexit


(I'll try exercising some old scripts and see what else happens.)

---|  Steven D. Majewski   (804-982-0831)  <sdm7g@Virginia.EDU>  |---
---|  Department of Molecular Physiology and Biological Physics  |---
---|  University of Virginia             Health Sciences Center  |---
---|  P.O. Box 10011            Charlottesville, VA  22906-0011  |---
		"All operating systems want to be unix, 
		 All programming languages want to be lisp." 



From barry@scottb.demon.co.uk  Sat Sep  9 11:40:04 2000
From: barry@scottb.demon.co.uk (Barry Scott)
Date: Sat, 9 Sep 2000 11:40:04 +0100
Subject: [Python-Dev] xml missing in Windows installer?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOENFHEAA.tim_one@email.msn.com>
Message-ID: <000001c01a4a$5066f280$060210ac@private>

I understand what you did and why. What I think is wrong is to use the
same name for the filename of the windows installer, source tar etc.

Each kit has a unique version but you have not reflected it in the
filenames. Only the filename is visible in a browser.

Why can't you add the 3 vs. 4 mark to the file name?

I cannot see the time stamp from a browser without downloading the file.

Won't you be getting bug reports against 2.0b1 and not know which one
the user has, unless you remember to tell them that the #n is important?

You don't have any quick way to check that the webmaster at CNRI has changed
the file to your newer version without downloading it.

I'm sure there are other tasks that users and developers will find are made harder.

	BArry



From tim_one@email.msn.com  Sat Sep  9 12:18:21 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sat, 9 Sep 2000 07:18:21 -0400
Subject: [Python-Dev] xml missing in Windows installer?
In-Reply-To: <000001c01a4a$5066f280$060210ac@private>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEBJHFAA.tim_one@email.msn.com>

Sorry, but I can't do anything more about this now.  The notice was supposed
to go up on the website at the same instant as the new installer, but the
people who can actually put the notice up *still* haven't done it.

In the future I'll certainly change the filename, should this ever happen
again (and, no, I can't change the filename from here either).

In the meantime, you don't want to hear this, but you're certainly free to
change the filenames on your end <wink -- but nobody yet has reported an
actual real-life confusion related to this, so while it may suck in theory,
practice appears much more forgiving>.

BTW, I didn't understand the complaint about "same name for the filename of
the windows installer, source tar etc.".  The *only* file I had replaced was

    BeOpen-Python-2.0b1.exe

I guess Fred replaced the PDF-format doc downloads too?  IIRC, those were
totally broken.  Don't think anything else was changed.

About bug reports, the only report of any possible relevance will be "I
tried to load the xml package under Windows 2.0b1, but got an
ImportError" -- and the cause of that will be obvious.  Also remember that
this is a beta release:  by definition, anyone using it at all a few weeks
from now is entirely on their own.

> -----Original Message-----
> From: Barry Scott [mailto:barry@scottb.demon.co.uk]
> Sent: Saturday, September 09, 2000 6:40 AM
> To: Tim Peters; python-dev@python.org
> Subject: RE: [Python-Dev] xml missing in Windows installer?
>
>
> I understand what you did and why. What I think is wrong is to use the
> same name for the filename of the windows installer, source tar etc.
>
> Each kit has a unique version but you have not reflected it in the
> filenames. Only the filename is visible in a browser.
>
> Why can't you add the 3 vs. 4 mark to the file name?
>
> I cannot see the time stamp from a browser without downloading the file.
>
> Won't you be getting bug reports against 2.0b1 and not know which one
> the user has, unless you remember to tell them that the #n is important?
>
> You don't have any quick way to check that the webmaster at CNRI
> has changed
> the file to your newer version without downloading it.
>
> I'm sure there are other tasks that users and developers will find
> are made harder.
>
> 	BArry




From MarkH@ActiveState.com  Sat Sep  9 16:36:54 2000
From: MarkH@ActiveState.com (Mark Hammond)
Date: Sun, 10 Sep 2000 02:36:54 +1100
Subject: [Python-Dev] xml missing in Windows installer?
In-Reply-To: <000001c01a4a$5066f280$060210ac@private>
Message-ID: <ECEPKNMJLHAPFFJHDOJBIEJHDIAA.MarkH@ActiveState.com>

> I understand what you did and why. What I think is wrong is to use the
> same name for the filename of the windows installer, source tar etc.

Seeing as everyone (both of you <wink>) is hassling Tim, let me also stick
up for his actions.  This is a beta release and, as Tim said, the re-release
changed nothing beyond what gets installed.  The symptoms are obvious.
Sheesh - most people will hardly be aware xml support is _supposed_ to be
there :-)

I can see the other POV, but I don't think this is worth the administrative
overhead of a newly branded release.

Feeling-chatty, ly.

Mark.



From jack@oratrix.nl  Sat Sep  9 23:53:50 2000
From: jack@oratrix.nl (Jack Jansen)
Date: Sun, 10 Sep 2000 00:53:50 +0200
Subject: [Python-Dev] Re: [Pythonmac-SIG] Need some hands to debug MacPython installer
In-Reply-To: Message by "Steven D. Majewski" <sdm7g@virginia.edu> ,
 Fri, 8 Sep 2000 20:23:45 -0400 (EDT) , <Pine.A32.3.90.1000908201956.15033A-100000@elvis.med.Virginia.EDU>
Message-ID: <20000909225355.381DDD71FF@oratrix.oratrix.nl>

Oops, indeed some of the new modules were inadvertently excluded. I'll 
create a new installer tomorrow (which should also contain the
documentation and such).
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen@oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 


From barry@scottb.demon.co.uk  Sun Sep 10 22:38:34 2000
From: barry@scottb.demon.co.uk (Barry Scott)
Date: Sun, 10 Sep 2000 22:38:34 +0100
Subject: [Python-Dev] How do you want bugs reported against 2.0 beta?
Message-ID: <000201c01b6f$78594510$060210ac@private>

I just checked the announcement on www.pythonlabs.com, and it's not mentioned.

		Barry



From barry@scottb.demon.co.uk  Sun Sep 10 22:35:33 2000
From: barry@scottb.demon.co.uk (Barry Scott)
Date: Sun, 10 Sep 2000 22:35:33 +0100
Subject: [Python-Dev] xml missing in Windows installer?
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBIEJHDIAA.MarkH@ActiveState.com>
Message-ID: <000101c01b6f$0cc94250$060210ac@private>

I guess you had not seen Tim's reply. I read his reply as understanding
the problem and saying that things will be done better for future kits.

I glad that you will have unique names for each of the beta releases.
This will allow beta testers to accurately report which beta kit they
see a problem in. That in turn will make fixing bug reports from the
beta simpler for the maintainers.

	BArry



From tim_one@email.msn.com  Sun Sep 10 23:21:41 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sun, 10 Sep 2000 18:21:41 -0400
Subject: [Python-Dev] xml missing in Windows installer?
In-Reply-To: <000101c01b6f$0cc94250$060210ac@private>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEEHHFAA.tim_one@email.msn.com>

[Barry Scott, presumably to Mark Hammond]
> I guess you had not seen Tim's reply.

Na, I think he did.  I bet he just thought you were being unbearably anal
about a non-problem in practice and wanted to annoy you back <wink>.

> I read his reply as understanding the problem and saying that things
> will be done better for future kits.

Oh yes.  We tried to take a shortcut, and it backfired.  I won't let that
happen again, and you were right to point it out (once <wink>).  BTW, the
notice *is* on the web site now, but depending on which browser you're
using, it may appear in a font so small it can't even be read!  The worst
part of moving to BeOpen.com so far was getting hooked up with professional
web designers who think HTML *should* be used for more than just giant
monolithic plain-text dumps <0.9 wink>; we can't change their elaborate
pages without extreme pain.

but-like-they-say-it's-the-sizzle-not-the-steak-ly y'rs  - tim




From tim_one@email.msn.com  Sun Sep 10 23:22:06 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sun, 10 Sep 2000 18:22:06 -0400
Subject: [Python-Dev] How do you want bugs reported against 2.0 beta?
In-Reply-To: <000201c01b6f$78594510$060210ac@private>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEEIHFAA.tim_one@email.msn.com>

> I just checked the announcement on www.pythonlabs.com, and it's
> not mentioned.

All bugs get reported on SourceForge.




From gward@mems-exchange.org  Mon Sep 11 14:53:53 2000
From: gward@mems-exchange.org (Greg Ward)
Date: Mon, 11 Sep 2000 09:53:53 -0400
Subject: [Python-Dev] Letting ../site-packages override the standard lib ?!
In-Reply-To: <39B94B85.BFD16019@lemburg.com>; from mal@lemburg.com on Fri, Sep 08, 2000 at 10:26:45PM +0200
References: <200009081702.LAA08275@localhost.localdomain> <Pine.LNX.4.21.0009081236020.16116-100000@amati.techno.com> <14777.18321.457342.757978@cj42289-a.reston1.va.home.com> <39B94B85.BFD16019@lemburg.com>
Message-ID: <20000911095352.A24415@ludwig.cnri.reston.va.us>

On 08 September 2000, M.-A. Lemburg said:
> To provide more flexibility to the third-party tools in such
> a situation, I think it would be worthwhile moving the
> site-packages/ entry in sys.path in front of the lib/python2.0/
> entry.
> 
> That way a third party tool can override the standard lib's
> package or module or take appropriate action to reintegrate
> the standard lib's package namespace into an extended one.

+0 -- I actually *like* the ability to upgrade/override bits of the
standard library; this is occasionally essential, particularly when
there are modules (or even namespaces) in the standard library that have
lives (release cycles) of their own independent of Python and its
library.

There's already a note in the Distutils README.txt about how to upgrade
the Distutils under Python 1.6/2.0; it boils down to, "rename
lib/python/2.0/distutils and then install the new version".  Are PyXML,
asyncore, cPickle, etc. going to need similar qualifications in their
READMEs?  Are RPMs (and other smart installers) of these modules going to
have to include code to do the renaming for you?

Ugh.  It's a proven fact that 73% of users don't read README files[1],
and I have a strong suspicion that the reliability of an RPM (or
whatever) decreases in proportion to the amount of
pre/post-install/uninstall code that it carries around with it.  I think
reordering sys.path would allow people to painlessly upgrade bits of the
standard library, and the benefits of this outweigh the "but then it's
not standard anymore!" objection.
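The reordering MAL proposes can be sketched in a few lines (a hypothetical
illustration, not what site.py actually does):

```python
import sys

# Hypothetical sketch of the proposed reordering: pull any
# site-packages entries to the front of sys.path so that a
# third-party install shadows the standard library's copy.
site_entries = [p for p in sys.path if "site-packages" in p]
other_entries = [p for p in sys.path if "site-packages" not in p]
sys.path[:] = site_entries + other_entries
```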

        Greg

[1] And 65% of statistics are completely made up!


From cgw@fnal.gov  Mon Sep 11 19:55:09 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Mon, 11 Sep 2000 13:55:09 -0500 (CDT)
Subject: [Python-Dev] find_recursionlimit.py vs. libpthread vs. linux
Message-ID: <14781.10893.273438.446648@buffalo.fnal.gov>

It has been noted by people doing testing on Linux systems that

ulimit -s unlimited
python Misc/find_recursionlimit.py

will run for a *long* time if you have built Python without threads, but
will die after about 2400/2500 iterations if you have built with
threads, regardless of the "ulimit" setting.

I had thought this was evidence of a bug in Pthreads.  In fact
(although we still have other reasons to suspect Pthread bugs),
the behavior is easily explained.  The function "pthread_initialize"
in pthread.c contains this very lovely code:

  /* Play with the stack size limit to make sure that no stack ever grows
     beyond STACK_SIZE minus two pages (one page for the thread descriptor
     immediately beyond, and one page to act as a guard page). */
  getrlimit(RLIMIT_STACK, &limit);
  max_stack = STACK_SIZE - 2 * __getpagesize();
  if (limit.rlim_cur > max_stack) {
    limit.rlim_cur = max_stack;
    setrlimit(RLIMIT_STACK, &limit);
  }

In "internals.h", STACK_SIZE is #defined to (2 * 1024 * 1024)

So whenever you're using threads, you have an effective rlimit of 2MB
for stack, regardless of what you may *think* you have set via 
"ulimit -s"

One more mystery explained!





From gward@mems-exchange.org  Mon Sep 11 22:13:00 2000
From: gward@mems-exchange.org (Greg Ward)
Date: Mon, 11 Sep 2000 17:13:00 -0400
Subject: [Python-Dev] Off-topic: common employee IP agreements?
Message-ID: <20000911171259.A26210@ludwig.cnri.reston.va.us>

Hi all --

sorry for the off-topic post.  I'd like to get a calibration reading
from other members of the open source community on an issue that's
causing some controversy around here: what sort of employee IP
agreements do other software/open source/Python/Linux/Internet-related
companies require their employees to sign?

I'm especially curious about companies that are prominent in the open
source world, like Red Hat, ActiveState, VA Linux, or SuSE; and big
companies that are involved in open source, like IBM or HP.  I'm also
interested in what universities, both around the world and in the U.S.,
impose on faculty, students, and staff.  If you have knowledge -- or
direct experience -- with any sort of employee IP agreement, though, I'm
curious to hear about it.  If possible, I'd like to get my hands on the
exact document your employer uses -- precedent is everything!  ;-)

Thanks -- and please reply to me directly; no need to pollute python-dev
with more off-topic posts.

        Greg
-- 
Greg Ward - software developer                gward@mems-exchange.org
MEMS Exchange / CNRI                           voice: +1-703-262-5376
Reston, Virginia, USA                            fax: +1-703-262-5367


From guido@beopen.com  Tue Sep 12 00:10:31 2000
From: guido@beopen.com (Guido van Rossum)
Date: Mon, 11 Sep 2000 18:10:31 -0500
Subject: [Python-Dev] obsolete urlopen.py in CVS
In-Reply-To: Your message of "Fri, 08 Sep 2000 13:13:48 EST."
 <14777.11356.106477.440474@buffalo.fnal.gov>
References: <14777.8009.543626.966203@buffalo.fnal.gov> <00ea01c019bc$1929f4e0$766940d5@hagrid>
 <14777.11356.106477.440474@buffalo.fnal.gov>
Message-ID: <200009112310.SAA08374@cj20424-a.reston1.va.home.com>

> Fredrik Lundh writes:
> 
>  > what exactly are you doing to check things out?

[Charles]
> cvs update -dAP
> 
>  > note that CVS may check things out from the Attic under
>  > certain circumstances, like if you do "cvs update -D".  see
>  > the CVS FAQ for more info.
> 
> No, I am not using the '-D' flag.

I would drop the -A flag -- what's it used for?

I've done the same dance for urlopen.py and it seems to have
disappeared now.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From guido@beopen.com  Tue Sep 12 00:14:38 2000
From: guido@beopen.com (Guido van Rossum)
Date: Mon, 11 Sep 2000 18:14:38 -0500
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: Your message of "Fri, 08 Sep 2000 15:47:08 +0200."
 <200009081347.PAA13686@python.inrialpes.fr>
References: <200009081347.PAA13686@python.inrialpes.fr>
Message-ID: <200009112314.SAA08409@cj20424-a.reston1.va.home.com>

[Vladimir]
> Seems like people are very surprised to see "print >> None" defaulting
> to "print >> sys.stderr". I must confess that now that I'm looking at
> it and after reading the PEP, this change lacks some argumentation.
> 
> In Python, this form surely looks & feels like the Unix cat /dev/null,
> that is, since None doesn't have a 'write' method, the print statement
> is expected to either raise an exception or be specialized for None to mean
> "the print statement has no effect". The deliberate choice of sys.stderr
> is not obvious.
> 
> I understand that Guido wanted to say "print >> None, args == print args"
> and simplify the script logic, but using None in this case seems like a
> bad spelling <wink>.
> 
> I have certainly carefully avoided any debates on the issue as I don't
> see myself using this feature any time soon, but when I see on c.l.py
> reactions of surprise on weakly argumented/documented features and I
> kind of feel the same way, I'd better ask for more arguments here myself.

(I read the followup and forgive you sys.stderr; didn't want to follow
up to the rest of the thread because it doesn't add much.)

After reading the little bit of discussion here, I still think
defaulting None to sys.stdout is a good idea.

Don't think of it as

  print >>None, args

Think of it as

  def func(file=None):
    print >>file, args

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From jeremy@beopen.com  Mon Sep 11 23:24:13 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Mon, 11 Sep 2000 18:24:13 -0400 (EDT)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <200009112314.SAA08409@cj20424-a.reston1.va.home.com>
References: <200009081347.PAA13686@python.inrialpes.fr>
 <200009112314.SAA08409@cj20424-a.reston1.va.home.com>
Message-ID: <14781.23437.165189.328323@bitdiddle.concentric.net>

>>>>> "GvR" == Guido van Rossum <guido@beopen.com> writes:

  GvR> Don't think of it as

  GvR>   print >>None, args

  GvR> Think of it as

  GvR>   def func(file=None):
  GvR>     print >>file, args

Huh?  Don't you mean think of it as:

def func(file=None):
    if file is None:
        import sys
        print >>sys.stdout, args
    else:
        print >>file, args

At least, I think that's why I find the use of None confusing.  I find
it hard to make a strong association between None and sys.stdout.  In
fact, when I was typing this message, I wrote it as sys.stderr and
only discovered my error upon re-reading the initial message.

Jeremy


From bwarsaw@beopen.com  Mon Sep 11 23:28:31 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Mon, 11 Sep 2000 18:28:31 -0400 (EDT)
Subject: [Python-Dev] Challenge about print >> None
References: <200009081347.PAA13686@python.inrialpes.fr>
 <200009112314.SAA08409@cj20424-a.reston1.va.home.com>
 <14781.23437.165189.328323@bitdiddle.concentric.net>
Message-ID: <14781.23695.934627.439238@anthem.concentric.net>

>>>>> "JH" == Jeremy Hylton <jeremy@beopen.com> writes:

    JH> At least, I think that's why I find the use of None confusing.
    JH> I find it hard to make a strong association between None and
    JH> sys.stdout.  In fact, when I was typing this message, I wrote
    JH> it as sys.stderr and only discovered my error upon re-reading
    JH> the initial message.

I think of it more like Vladimir does: "print >>None" should be
analogous to catting to /dev/null.

-Barry


From guido@beopen.com  Tue Sep 12 00:31:35 2000
From: guido@beopen.com (Guido van Rossum)
Date: Mon, 11 Sep 2000 18:31:35 -0500
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: Your message of "Mon, 11 Sep 2000 18:24:13 -0400."
 <14781.23437.165189.328323@bitdiddle.concentric.net>
References: <200009081347.PAA13686@python.inrialpes.fr> <200009112314.SAA08409@cj20424-a.reston1.va.home.com>
 <14781.23437.165189.328323@bitdiddle.concentric.net>
Message-ID: <200009112331.SAA08558@cj20424-a.reston1.va.home.com>

> >>>>> "GvR" == Guido van Rossum <guido@beopen.com> writes:
> 
>   GvR> Don't think of it as
> 
>   GvR>   print >>None, args
> 
>   GvR> Think of it as
> 
>   GvR>   def func(file=None):
>   GvR>     print >>file, args
> 
> Huh?  Don't you mean think of it as:
> 
> def func(file=None):
>     if file is None:
>        import sys
>        print >>sys.stdout, args
>     else:
> 	print >>file, args

I meant what I said.  I meant that you shouldn't think of examples
like the first one (which looks strange, just like "".join(list) does)
but examples like the second one, which (in my eye) make for more
readable and more maintainable code.

> At least, I think that's why I find the use of None confusing.  I find
> it hard to make a strong association between None and sys.stdout.  In
> fact, when I was typing this message, I wrote it as sys.stderr and
> only discovered my error upon re-reading the initial message.

You don't have to make a strong association with sys.stdout.  When the
file expression is None, the whole ">>file, " part disappears!

Note that the writeln() function, proposed by many, would have the
same behavior:

  def writeln(*args, file=None):
      if file is None:
          file = sys.stdout
      ...write args...

I know that's not legal syntax, but that's the closest
approximation.  This is intended to let you specify file=<some file>
and have the default be sys.stdout, but passing an explicit value of
None has the same effect as leaving it out.  This idiom is used in
lots of places!
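In legal (if clunkier) syntax, the same default handling looks like this
hypothetical sketch (the name writeln and its behavior are just for
illustration):

```python
import sys
from io import StringIO

def writeln(*args, **kwds):
    # file=None (or omitted) means sys.stdout, looked up at call time.
    file = kwds.pop("file", None)
    if kwds:
        raise TypeError("unexpected keyword argument(s): %r" % sorted(kwds))
    if file is None:
        file = sys.stdout
    file.write(" ".join(map(str, args)) + "\n")

buf = StringIO()
writeln("spam", 42, file=buf)   # explicit target
writeln("spam", 42, file=None)  # same as leaving file out: sys.stdout
```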

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From guido@beopen.com  Tue Sep 12 00:35:20 2000
From: guido@beopen.com (Guido van Rossum)
Date: Mon, 11 Sep 2000 18:35:20 -0500
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: Your message of "Mon, 11 Sep 2000 18:28:31 -0400."
 <14781.23695.934627.439238@anthem.concentric.net>
References: <200009081347.PAA13686@python.inrialpes.fr> <200009112314.SAA08409@cj20424-a.reston1.va.home.com> <14781.23437.165189.328323@bitdiddle.concentric.net>
 <14781.23695.934627.439238@anthem.concentric.net>
Message-ID: <200009112335.SAA08609@cj20424-a.reston1.va.home.com>

>     JH> At least, I think that's why I find the use of None confusing.
>     JH> I find it hard to make a strong association between None and
>     JH> sys.stdout.  In fact, when I was typing this message, I wrote
>     JH> it as sys.stderr and only discovered my error upon re-reading
>     JH> the initial message.
> 
> I think of it more like Vladimir does: "print >>None" should be
> analogous to catting to /dev/null.

Strong -1 on that.  You can do that with any number of other
approaches.

If, as a result of a misplaced None, output appears at the wrong place
by accident, it's easy to figure out why.  If it disappears
completely, it's a much bigger mystery because you may start
suspecting lots of other places.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From Vladimir.Marangozov@inrialpes.fr  Tue Sep 12 00:22:46 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Tue, 12 Sep 2000 01:22:46 +0200 (CEST)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <200009112331.SAA08558@cj20424-a.reston1.va.home.com> from "Guido van Rossum" at Sep 11, 2000 06:31:35 PM
Message-ID: <200009112322.BAA29633@python.inrialpes.fr>

Guido van Rossum wrote:
> 
> >   GvR> Don't think of it as
> > 
> >   GvR>   print >>None, args
> > 
> >   GvR> Think of it as
> > 
> >   GvR>   def func(file=None):
> >   GvR>     print >>file, args

I understand that you want me to think this way. But that's not my
intuitive thinking. I would have written your example like this:

def func(file=sys.stdout):
    print >> file, args

This is clearer, compared to None, which is not a file.

> ...  This is intended to let you specify file=<some file>
> and have the default be sys.stdout, but passing an explicit value of
> None has the same effect as leaving it out.  This idiom is used in
> lots of places!

Exactly.
However, my expectation would be to leave out the whole print statement.
I think that any specialization of None is mysterious and would be hard
to teach. From this POV, I agree with MAL that raising an exception is
the cleanest and the simplest way to do it. Any specialization of my
thought here is perceived as a burden.

However, if such specialization is desired, I'm certainly closer to
/dev/null than sys.stdout. As long as one starts redirecting output,
I believe that one already has enough knowledge about files, and in
particular about stdin, stdout and stderr. None in the sense of /dev/null
is not so far from that. It is a simple concept. But this is already
"advanced knowledge" about redirecting output on purpose.

So as long as one uses extended print, she's already an advanced user.
From this perspective print >> F means "redirect the output of
print to the file <F>". If F is not really a file, the F object must
have a write() method. This is the second approximation which is easy
to grasp.  But if F is not a file and does not have a write() method,
then what? Then, you say that it could be None which equals sys.stdout
(a file!). This contradicts the whole logic above and frankly, this
concept specialization is not intuitive and is difficult to grasp
(despite the "nice, leaving out" property that you seem to value, which
in this context isn't worth much in my eyes because I think that
it results in more enigmatic code - it's not explicit, but rather
magically implicit).

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From tim_one@email.msn.com  Tue Sep 12 02:27:10 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Mon, 11 Sep 2000 21:27:10 -0400
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <200009112322.BAA29633@python.inrialpes.fr>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEHLHFAA.tim_one@email.msn.com>

[Vladimir Marangozov]
> ...
> As long as one starts redirecting output, I believe that one already
> has enough knowledge about files, and in particular about stdin,
> stdout and stderr. None in the sense of /dev/null is not so far from
> that.  It is a simple concept. But this is already "advanced
> knowledge" about redirecting output on purpose.

This is so Unix-centric, though; e.g., native windows users have only the
dimmest knowledge of stderr, and almost none of /dev/null.  Which ties in
to:

> So as long as one uses extended print, she's already an advanced user.

Nope!  "Now how did I get this to print to a file instead?" is one of the
faqiest of newbie FAQs on c.l.py, and the answers they've been given in the
past were sheer torture for them ("sys?  what's that?  rebind sys.stdout to
a file-like object?  what?! etc").

This is one of those cases where Guido is right, but for reasons nobody can
explain <0.8 wink>.

sometimes-you-just-gotta-trust-your-bdfl-ly y'rs  - tim




From paul@prescod.net  Tue Sep 12 06:34:10 2000
From: paul@prescod.net (Paul Prescod)
Date: Mon, 11 Sep 2000 22:34:10 -0700
Subject: [Python-Dev] Challenge about print >> None
References: <200009112322.BAA29633@python.inrialpes.fr>
Message-ID: <39BDC052.A9FEDE80@prescod.net>

Vladimir Marangozov wrote:
> 
>...
> 
> def func(file=sys.stdout):
>     print >> file, args
> 
> This is clearer, compared to None, which is not a file.

I've gotta say that I agree with you on all issues. If I saw that
file=None stuff in code in another programming language I would expect
it meant send the output nowhere. People who want sys.stdout can get it.
Special cases aren't special enough to break the rules!
-- 
 Paul Prescod - Not encumbered by corporate consensus
Simplicity does not precede complexity, but follows it. 
	- http://www.cs.yale.edu/homes/perlis-alan/quotes.html


From Fredrik Lundh" <effbot@telia.com  Tue Sep 12 08:10:53 2000
From: Fredrik Lundh" <effbot@telia.com (Fredrik Lundh)
Date: Tue, 12 Sep 2000 09:10:53 +0200
Subject: [Python-Dev] Challenge about print >> None
References: <200009112322.BAA29633@python.inrialpes.fr>
Message-ID: <003001c01c88$aad09420$766940d5@hagrid>

Vladimir wrote:
> I understand that you want me to think this way. But that's not my
> intuitive thinking. I would have written your example like this:
> 
> def func(file=sys.stdout):
>     print >> file, args
> 
> This is clearer, compared to None, which is not a file.

Sigh.  Your code doesn't work.  Quoting the PEP, from the section
that discusses why passing None is the same thing as passing no
file at all:

    "Note: defaulting the file argument to sys.stdout at compile time
    is wrong, because it doesn't work right when the caller assigns to
    sys.stdout and then uses tables() without specifying the file."
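A short sketch of what the PEP is warning about, i.e. def-time versus
call-time lookup of the default (function names here are made up for the
example):

```python
import sys
from io import StringIO

def emit_early(text, file=sys.stdout):   # default bound once, at def time
    file.write(text)

def emit_late(text, file=None):          # None means "look it up now"
    if file is None:
        file = sys.stdout
    file.write(text)

saved, sys.stdout = sys.stdout, StringIO()  # caller rebinds sys.stdout
emit_early("early\n")   # still goes to the *original* stdout
emit_late("late\n")     # honors the rebinding
captured, sys.stdout = sys.stdout.getvalue(), saved
```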

I was sceptical at first, but the more I see of your counter-arguments,
the more I support Guido here.  As he pointed out, None usually means
"pretend I didn't pass this argument" in Python.  No difference here.

+1 on keeping print as it's implemented (None means default).
-1 on making None behave like a NullFile.

</F>



From Vladimir.Marangozov@inrialpes.fr  Tue Sep 12 15:11:14 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Tue, 12 Sep 2000 16:11:14 +0200 (CEST)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIEHLHFAA.tim_one@email.msn.com> from "Tim Peters" at Sep 11, 2000 09:27:10 PM
Message-ID: <200009121411.QAA30848@python.inrialpes.fr>

Tim Peters wrote:
> 
> [Vladimir Marangozov]
> > ...
> > As long as one starts redirecting output, I believe that one already
> > has enough knowledge about files, and in particular about stdin,
> > stdout and stderr. None in the sense of /dev/null is not so far from
> > that.  It is a simple concept. But this is already "advanced
> > knowledge" about redirecting output on purpose.
> 
> This is so Unix-centric, though; e.g., native windows users have only the
> dimmest knowledge of stderr, and almost none of /dev/null.

Ok, forget about /dev/null. It was just a spelling of "print to None"
which has a meaning even in spoken English.


> Which ties in to:
> 
> > So as long as one uses extended print, she's already an advanced user.
> 
> Nope!  "Now how did I get this to print to a file instead?" is one of the
> faqiest of newbie FAQs on c.l.py, and the answers they've been given in the
> past were sheer torture for them ("sys?  what's that?  rebind sys.stdout to
> a file-like object?  what?! etc").

Look, this is getting silly. You can't align the experienced users' level
of knowledge with that of newbies. What I'm trying to make clear here is
that you're not disturbing newbies, you're disturbing experienced users
and teachers who are supposed to transmit their knowledge to these newbies.

FWIW, I am one of these teachers and I have had enough classes in this
domain to trust my experience and my judgement on the students' logic
more than Guido's and your perceptions taken together about *this*
feature in particular. If you want real feedback from newbies, don't take
c.l.py as the reference -- you'd better go to the nearest school or
University and start teaching.  (how's that as a reply to your attempts
to make me think one way or another or trust abbreviations <0.1 wink>)

As long as you have embarked on the output redirection business, you
have done so explicitly, because you're supposed to understand what it
means and how it works. This is "The Next Level" in knowledge, implying
that whenever you use extended print *explicitly*, you're supposed to
provide explicitly the target of the output.

Reverting that back with None, by saying that "print >> None == print"
is illogical, because you've already engaged in this advanced concept.
Rolling back your explicit decision about dealing with redirected output
with an explicit None (yes, you must provide it explicitly to fall back
to the original behavior) is the wrong path of reasoning.  If you don't
want to redirect output, don't use extended print in the first place.
And if you want to achieve the effect of "simple" print, you should pass
sys.stdout.

I really don't see the point of explicitly passing None instead of
passing sys.stdout, once you've made your decision about redirecting
output. And in this regard, both Guido and you have not provided any
arguments that would make me think that you're probably right.
I understand very well your POV, you don't seem to understand mine.

And let me add to that the following summary: the whole extended
print idea is about convenience. Convenience for those that know
what file redirection is. Not for newbies. You can't argue too much
about extended print as an intuitive concept for newbies. The present
change disturbs experienced users (the >> syntax aside) and you get
signals about that from them, because the current behavior does not
comply with any existing concept as far as file redirection is concerned.
However, since these guys are experienced and knowledgeable, they already
understand this game quite well. So what you get is just "Oh really? OK,
this is messy" from the chatty ones and everybody moves on.  The others
just don't care, but they don't necessarily agree.

I don't care either, but fact is that I've filled two screens of text
explaining to you that you're playing with 2 different knowledge levels.
You shouldn't try to reduce the upper level to the lower one, just because
you think it is more Pythonic for newbies. You'd better take the opposite
direction and raise the newbie standard to what happens to be a very well
known concept in the area of computer programming, and in CS in general.

To provoke you a bit more, I'll tell you that I see no conceptual difference
between
             print >> None, args

and
             print >> 0, args -or- print >> [], args  -or- print >> "", args

(if you prefer, you can replace (), "", [], etc. with a var name, which can be
 assigned these values)

That is, I don't see a conceptual difference between None and any object
which evaluates to false. However, the latter are not allowed. Funny,
isn't it.  What makes None so special? <wink>

Now, the only argument I got is the one Fredrik has quoted from the PEP,
dealing with passing the default file as a parameter. I'll focus briefly
on it.

[Fredrik]

> [me]
> > def func(file=sys.stdout):
> >     print >> file, args
> > 
> > This is clearer, compared to None, which is not a file.
>
> Sigh.  Your code doesn't work.  Quoting the PEP, from the section
> that discusses why passing None is the same thing as passing no
> file at all:
> 
>     "Note: defaulting the file argument to sys.stdout at compile time
>     is wrong, because it doesn't work right when the caller assigns to
>     sys.stdout and then uses tables() without specifying the file."

Of course it doesn't work if you assign to sys.stdout. But hey,
if you assign to sys.stdout, you know what 'sys' is, what 'sys.stdout' is,
and you know basically everything about std files and output. Don't you?

Anyway, this argument is flawed, because the above is in no way
different than the issues raised when you define a default argument
which is a list, dict, tuple, etc. Compile time evaluation of default args
is a completely different discussion and extended print has (almost)
nothing to do with that. Guido has made this (strange) association between
two different subjects, which, btw, I perceive as an additional burden.

It is far better to deal with the value of the default argument within
the body of the function: this way, there are no misunderstandings.
None has all the symptoms of a hackish shortcut here.
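The classic def-time default trap Vladimir alludes to, as a sketch:

```python
def append_to(item, bucket=[]):   # the [] is created once, at def time
    bucket.append(item)
    return bucket

first = append_to(1)
second = append_to(2)    # reuses the *same* list object
print(first is second)   # callers share one accumulating list
```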

> 
> This is one of those cases where Guido is right, but for reasons nobody can
> explain <0.8 wink>.

I'm sorry. I think that this is one of those rare cases where he is wrong.
His path of reasoning is less straightforward, and I can't adopt it. And
it seems like I'm not alone. If you ever see a columnist talking about
Python's features and extended print (mentioning print >> None as a good
thing), please let me know about it.

> 
> sometimes-you-just-gotta-trust-your-bdfl-ly y'rs  - tim
> 

I would have preferred arguments. The PEP and your responses lack them,
which is another sign about this feature.


stop-troubadouring-about-blind-BDFL-compliance-in-public'ly y'rs
-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From Fredrik Lundh" <effbot@telia.com  Tue Sep 12 15:48:11 2000
From: Fredrik Lundh" <effbot@telia.com (Fredrik Lundh)
Date: Tue, 12 Sep 2000 16:48:11 +0200
Subject: [Python-Dev] Challenge about print >> None
References: <200009121411.QAA30848@python.inrialpes.fr>
Message-ID: <004801c01cc8$7ed99700$766940d5@hagrid>

> > Sigh.  Your code doesn't work.  Quoting the PEP, from the section
> > that discusses why passing None is the same thing as passing no
> > file at all:
> > 
> >     "Note: defaulting the file argument to sys.stdout at compile time
> >     is wrong, because it doesn't work right when the caller assigns to
> >     sys.stdout and then uses tables() without specifying the file."
> 
> Of course it doesn't work if you assign to sys.stdout. But hey,
> if you assign to sys.stdout, you know what 'sys' is, what 'sys.stdout' is,
> and you know basically everything about std files and output. Don't you?

no.  and since you're so much smarter than everyone else,
you should be able to figure out why.

followups to /dev/null, please.

</F>



From Vladimir.Marangozov@inrialpes.fr  Tue Sep 12 18:12:04 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Tue, 12 Sep 2000 19:12:04 +0200 (CEST)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <004801c01cc8$7ed99700$766940d5@hagrid> from "Fredrik Lundh" at Sep 12, 2000 04:48:11 PM
Message-ID: <200009121712.TAA31347@python.inrialpes.fr>

Fredrik Lundh wrote:
> 
> no.  and since you're so much smarter than everyone else,
> you should be able to figure out why.
> 
> followups to /dev/null, please.

pass


print >> pep-0214.txt, next_argument_if_not_None 'ly y'rs
-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From tismer@appliedbiometrics.com  Tue Sep 12 17:35:13 2000
From: tismer@appliedbiometrics.com (Christian Tismer)
Date: Tue, 12 Sep 2000 19:35:13 +0300
Subject: [Python-Dev] Challenge about print >> None
References: <200009112322.BAA29633@python.inrialpes.fr> <003001c01c88$aad09420$766940d5@hagrid>
Message-ID: <39BE5B41.16143E76@appliedbiometrics.com>


Fredrik Lundh wrote:
> 
> Vladimir wrote:
> > I understand that you want me to think this way. But that's not my
> > intuitive thinking. I would have written your example like this:
> >
> > def func(file=sys.stdout):
> >     print >> file, args
> >
> > This is clearer, compared to None, which is not a file.

This is not clearer.  Instead, it is presetting a parameter with a
mutable object - bad practice!

> Sigh.  Your code doesn't work.  Quoting the PEP, from the section
> that discusses why passing None is the same thing as passing no
> file at all:
> 
>     "Note: defaulting the file argument to sys.stdout at compile time
>     is wrong, because it doesn't work right when the caller assigns to
>     sys.stdout and then uses tables() without specifying the file."
> 
> I was sceptical at first, but the more I see of your counter-arguments,
> the more I support Guido here.  As he pointed out, None usually means
> "pretend I didn't pass this argument" in Python.  No difference here.
> 
> +1 on keeping print as it's implemented (None means default).
> -1 on making None behave like a NullFile.

Seconded!

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer@appliedbiometrics.com>
Applied Biometrics GmbH      :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com


From nascheme@enme.ucalgary.ca  Tue Sep 12 19:03:55 2000
From: nascheme@enme.ucalgary.ca (Neil Schemenauer)
Date: Tue, 12 Sep 2000 12:03:55 -0600
Subject: [Python-Dev] PyWX (Python AOLserver plugin)
In-Reply-To: <EDFD2A95EE7DD31187350090279C6767E459CE@THRESHER>; from Brent Fulgham on Tue, Sep 12, 2000 at 10:40:36AM -0700
References: <EDFD2A95EE7DD31187350090279C6767E459CE@THRESHER>
Message-ID: <20000912120355.A2457@keymaster.enme.ucalgary.ca>

You probably want to address the python-dev mailing list.  I have CCed
this message in the hope that some of the more experienced developers
can help.  The PyWX website is at: http://pywx.idyll.org/.

On Tue, Sep 12, 2000 at 10:40:36AM -0700, Brent Fulgham wrote:
> We've run across some problems with the Python's internal threading
> design, and its handling of module loading.
> 
> The AOLserver plugin spawns new Python interpreter threads to
> service new HTTP connections.  Each thread is theoretically its
> own interpreter, and should have its own namespace, set of loaded
> packages, etc.
> 
> This is largely true, but we run across trouble with the way
> the individual threads handle 'argv' variables and current
> working directory.
> 
> CGI scripts typically pass data as variables to the script
> (as argv).  These (unfortunately) are changed globally across
> all Python interpreter threads, which can cause problems....
> 
> In addition, the current working directory is not unique
> among independent Python interpreters.  So if a script changes
> its directory to something, all other running scripts (in
> unique python interpreter threads) now have their cwd set to
> this directory.
> 
> So we have to address these issues at some point...  Any hope
> that something like this could be fixed in 2.0?

Are you using separate interpreters or one interpreter with multiple
threads?  It sounds like the latter.  If you use the latter, then
definitely things like the process address space and the current working
directory are shared across the threads.  I don't think I understand
your design.  Can you explain the architecture of PyWX?
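The shared-cwd behavior Brent describes is easy to reproduce with ordinary
threads (a small sketch; the same applies across sub-interpreters, since
the working directory belongs to the process, not to any interpreter
state):

```python
import os
import shutil
import tempfile
import threading

start_dir = os.getcwd()
other_dir = tempfile.mkdtemp()
seen = {}

def worker():
    # This thread never calls chdir itself...
    seen["cwd"] = os.getcwd()

os.chdir(other_dir)                 # ...but the main thread does,
t = threading.Thread(target=worker) # and the worker sees the change,
t.start()                           # because the cwd is per-process.
t.join()
os.chdir(start_dir)                 # restore and clean up
shutil.rmtree(other_dir)
```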

  Neil


From brent.fulgham@xpsystems.com  Tue Sep 12 19:18:03 2000
From: brent.fulgham@xpsystems.com (Brent Fulgham)
Date: Tue, 12 Sep 2000 11:18:03 -0700
Subject: [Python-Dev] RE: PyWX (Python AOLserver plugin)
Message-ID: <EDFD2A95EE7DD31187350090279C6767E45A09@THRESHER>

> Are you using separate interpreters or one interpreter with multiple
> threads?  It sounds like the latter.  If you use the latter, then
> things like the process address space and the current working
> directory are definitely shared across the threads.  I don't 
> think I understand your design.  Can you explain the architecture
> of PyWX?
> 

There are some documents on the website that give a bit more detail,
but in a nutshell we were using the Python interpreter thread concept
(Py_InterpreterNew, etc.) to allow 'independent' interpreters to
service HTTP requests in the server.

We are basically running afoul of the problems with the interpreter
isolation, as documented in the various Python embed docs.

"""Because sub-interpreters (and the main interpreter) are part of
the same process, the insulation between them isn't perfect -- for 
example, using low-level file operations like os.close() they can
(accidentally or maliciously) affect each other's open files. 
Because of the way extensions are shared between (sub-)interpreters,
some extensions may not work properly; this is especially likely
when the extension makes use of (static) global variables, or when
the extension manipulates its module's dictionary after its 
initialization"""

So we are basically stuck.  We can't link against Python multiple
times, so our only avenue to provide multiple interpreter instances
is to use the "Py_InterpreterNew" call and hope for the best.

Any hope for better interpreter isolation in 2.0? (2.1?)

-Brent



From Vladimir.Marangozov@inrialpes.fr  Tue Sep 12 19:51:21 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Tue, 12 Sep 2000 20:51:21 +0200 (CEST)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <39BE5B41.16143E76@appliedbiometrics.com> from "Christian Tismer" at Sep 12, 2000 07:35:13 PM
Message-ID: <200009121851.UAA31622@python.inrialpes.fr>

Christian Tismer wrote:
> 
> > Vladimir wrote:
> > > I understand that you want me to think this way. But that's not my
> > > intuitive thinking. I would have written your example like this:
> > >
> > > def func(file=sys.stdout):
> > >     print >> file, args
> > >
> > > This is clearer, compared to None, which is not a file.
> 
> This is not clearer.
> Instead, it is presetting a parameter
> with a mutable object - bad practice!

I think I mentioned that default function args and explicit output
streams are two disjoint issues. In the case of extended print,
half of us perceive that as a mix of concepts unrelated to Python,
the other half sees them as natural for specifying default behavior
in Python. The real challenge about print >> None is that the latter
half would need to explain to the first one (including newcomers with
various backgrounds) that this is natural thinking in Python. I am
sceptical about the results, as long as one has to explain that this
is done on purpose to someone who thinks that this is a mix of concepts.

A naive illustration to the above is that "man fprintf" does not say
that when the stream is NULL, fprintf behaves like printf. Indeed,
fprintf(NULL, args) dumps core. There are two distinct functions for
different things. Either you care and you use fprintf (print >>),
or you don't care and you use printf (print). Not both. If you
think you can do both in one shot, elaborate on that magic in the PEP.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From cgw@fnal.gov  Tue Sep 12 19:47:31 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Tue, 12 Sep 2000 13:47:31 -0500 (CDT)
Subject: [Python-Dev] Unexpected rexec behavior due to _sre
Message-ID: <14782.31299.800325.803340@buffalo.fnal.gov>

Python 1.5.2 (#3, Feb 11 2000, 15:30:14)  [GCC 2.7.2.3.f.1] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
>>> import rexec
>>> r = rexec.RExec()
>>> r.r_exec("import re")
>>> 

Python 2.0b1 (#2, Sep  8 2000, 12:10:17) 
[GCC 2.95.2 19991024 (release)] on linux2
Type "copyright", "credits" or "license" for more information.
>>> import rexec
>>> r=rexec.RExec()
>>> r.r_exec("import re")

Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/usr/lib/python2.0/rexec.py", line 253, in r_exec
    exec code in m.__dict__
  File "<string>", line 1, in ?
  File "/usr/lib/python2.0/rexec.py", line 264, in r_import
    return self.importer.import_module(mname, globals, locals, fromlist)
  File "/usr/lib/python2.0/ihooks.py", line 396, in import_module
    q, tail = self.find_head_package(parent, name)
  File "/usr/lib/python2.0/ihooks.py", line 432, in find_head_package
    q = self.import_it(head, qname, parent)
  File "/usr/lib/python2.0/ihooks.py", line 485, in import_it
    m = self.loader.load_module(fqname, stuff)
  File "/usr/lib/python2.0/ihooks.py", line 324, in load_module
    exec code in m.__dict__
  File "/usr/lib/python2.0/re.py", line 28, in ?
    from sre import *
  File "/usr/lib/python2.0/rexec.py", line 264, in r_import
    return self.importer.import_module(mname, globals, locals, fromlist)
  File "/usr/lib/python2.0/ihooks.py", line 396, in import_module
    q, tail = self.find_head_package(parent, name)
  File "/usr/lib/python2.0/ihooks.py", line 432, in find_head_package
    q = self.import_it(head, qname, parent)
  File "/usr/lib/python2.0/ihooks.py", line 485, in import_it
    m = self.loader.load_module(fqname, stuff)
  File "/usr/lib/python2.0/ihooks.py", line 324, in load_module
    exec code in m.__dict__
  File "/usr/lib/python2.0/sre.py", line 19, in ?
    import sre_compile
  File "/usr/lib/python2.0/rexec.py", line 264, in r_import
    return self.importer.import_module(mname, globals, locals, fromlist)
  File "/usr/lib/python2.0/ihooks.py", line 396, in import_module
    q, tail = self.find_head_package(parent, name)
  File "/usr/lib/python2.0/ihooks.py", line 432, in find_head_package
    q = self.import_it(head, qname, parent)
  File "/usr/lib/python2.0/ihooks.py", line 485, in import_it
    m = self.loader.load_module(fqname, stuff)
  File "/usr/lib/python2.0/ihooks.py", line 324, in load_module
    exec code in m.__dict__
  File "/usr/lib/python2.0/sre_compile.py", line 11, in ?
    import _sre
  File "/usr/lib/python2.0/rexec.py", line 264, in r_import
    return self.importer.import_module(mname, globals, locals, fromlist)
  File "/usr/lib/python2.0/ihooks.py", line 396, in import_module
    q, tail = self.find_head_package(parent, name)
  File "/usr/lib/python2.0/ihooks.py", line 439, in find_head_package
    raise ImportError, "No module named " + qname
ImportError: No module named _sre

Of course I can work around this by doing:

>>> r.ok_builtin_modules += '_sre',
>>> r.r_exec("import re")          

But I really shouldn't have to do this, right?  _sre is supposed to be
a low-level implementation detail.  I think I should still be able to 
"import re" in a restricted environment without having to be aware of
_sre.


From effbot@telia.com  Tue Sep 12 20:12:20 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Tue, 12 Sep 2000 21:12:20 +0200
Subject: [Python-Dev] urllib problems under 2.0
Message-ID: <005e01c01ced$6bb19180$766940d5@hagrid>

the proxy code in 2.0b1's new urllib is broken on my box.

here's the troublemaker:

                proxyServer = str(_winreg.QueryValueEx(internetSettings,
                                                       'ProxyServer')[0])
                if ';' in proxyServer:        # Per-protocol settings
                    for p in proxyServer.split(';'):
                        protocol, address = p.split('=')
                        proxies[protocol] = '%s://%s' % (protocol, address)
                else:        # Use one setting for all protocols
                    proxies['http'] = 'http://%s' % proxyServer
                    proxies['ftp'] = 'ftp://%s' % proxyServer

now, on my box, the proxyServer string is "https=127.0.0.1:1080"
(an encryption proxy used by my bank), so the above code happily
creates the following proxy dictionary:

proxy = {
    "http": "http://https=127.0.0.1:1080",
    "ftp": "ftp://https=127.0.0.1:1080"
}

which, of course, results in a "host not found" no matter what URL
I pass to urllib...

:::

a simple fix would be to change the initial test to:

                if "=" in proxyServer:

does anyone have a better idea, or should I check this one
in right away?
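a minimal sketch of what the fixed parsing could look like, as a
standalone function (parse_proxy_server is an illustrative helper,
not the actual urllib code):

```python
def parse_proxy_server(proxy_server):
    """Parse a Windows-registry ProxyServer value into a proxies dict.

    Treat the value as per-protocol whenever it contains "=", even if
    there is only one entry and no ";" separator (as with a lone
    "https=127.0.0.1:1080" setting).
    """
    proxies = {}
    if "=" in proxy_server:
        # Per-protocol settings, e.g. "http=proxy:80;https=127.0.0.1:1080"
        for part in proxy_server.split(";"):
            protocol, address = part.split("=", 1)
            proxies[protocol] = "%s://%s" % (protocol, address)
    else:
        # One setting for all protocols, e.g. "proxy.example.com:8080"
        proxies["http"] = "http://%s" % proxy_server
        proxies["ftp"] = "ftp://%s" % proxy_server
    return proxies
```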

</F>



From titus@caltech.edu  Tue Sep 12 20:14:12 2000
From: titus@caltech.edu (Titus Brown)
Date: Tue, 12 Sep 2000 12:14:12 -0700
Subject: [Python-Dev] Re: [PyWX] RE: PyWX (Python AOLserver plugin)
In-Reply-To: <EDFD2A95EE7DD31187350090279C6767E45A09@THRESHER>; from brent.fulgham@xpsystems.com on Tue, Sep 12, 2000 at 11:18:03AM -0700
References: <EDFD2A95EE7DD31187350090279C6767E45A09@THRESHER>
Message-ID: <20000912121412.B6850@cns.caltech.edu>

-> > Are you using separate interpreters or one interpreter with multiple
-> > threads?  It sounds like the latter.  If you use the latter, then
-> > things like the process address space and the current working
-> > directory are definitely shared across the threads.  I don't 
-> > think I understand your design.  Can you explain the architecture
-> > of PyWX?
-> > 
-> 
-> """Because sub-interpreters (and the main interpreter) are part of
-> the same process, the insulation between them isn't perfect -- for 
-> example, using low-level file operations like os.close() they can
-> (accidentally or maliciously) affect each other's open files. 
-> Because of the way extensions are shared between (sub-)interpreters,
-> some extensions may not work properly; this is especially likely
-> when the extension makes use of (static) global variables, or when
-> the extension manipulates its module's dictionary after its 
-> initialization"""
-> 
-> So we are basically stuck.  We can't link against Python multiple
-> times, so our only avenue to provide multiple interpreter instances
-> is to use the "Py_InterpreterNew" call and hope for the best.
-> 
-> Any hope for better interpreter isolation in 2.0? (2.1?)

Perhaps a better question is: is there any way to get around these problems
without moving from a threaded model (which we like) to a process model?

Many of the problems we're running into because of this lack of interpreter
isolation are due to the UNIX threading model, as I see it.  For example,
the low-level file operation interference, cwd problems, and environment
variable problems are all caused by UNIX's determination to share this stuff
across threads.  I don't see any way of changing this without causing far
more problems than we fix.

cheers,
--titus


From effbot@telia.com  Tue Sep 12 20:34:58 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Tue, 12 Sep 2000 21:34:58 +0200
Subject: [Python-Dev] Challenge about print >> None
References: <200009121851.UAA31622@python.inrialpes.fr>
Message-ID: <006e01c01cf0$921a4da0$766940d5@hagrid>

vladimir wrote:
> In the case of extended print, half of us perceive that as a mix of
> concepts unrelated to Python, the other half sees them as natural
> for specifying default behavior in Python.

Sigh.  None doesn't mean "default", it means "doesn't exist"
"nothing" "ingenting" "nada" "none" etc.

"def foo(): return" uses None to indicate that there was no
return value.

"map(None, seq)" uses None to indicate that there is really
no function to map things through.

"import" stores None in sys.modules to indicate that certain
package components don't exist.

"print >>None, value" uses None to indicate that there is
really no redirection -- in other words, the value is printed
in the usual location.

</None>



From effbot@telia.com  Tue Sep 12 20:40:04 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Tue, 12 Sep 2000 21:40:04 +0200
Subject: [Python-Dev] XML runtime errors?
Message-ID: <009601c01cf1$467458e0$766940d5@hagrid>

stoopid question: why the heck is xmllib using
"RuntimeError" to flag XML syntax errors?

raise RuntimeError, 'Syntax error at line %d: %s' % (self.lineno, message)

what's wrong with "SyntaxError"?

</F>



From Vladimir.Marangozov@inrialpes.fr  Tue Sep 12 20:43:32 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Tue, 12 Sep 2000 21:43:32 +0200 (CEST)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <006e01c01cf0$921a4da0$766940d5@hagrid> from "Fredrik Lundh" at Sep 12, 2000 09:34:58 PM
Message-ID: <200009121943.VAA31771@python.inrialpes.fr>

Fredrik Lundh wrote:
> 
> vladimir wrote:
> > In the case of extended print, half of us perceive that as a mix of
> > concepts unrelated to Python, the other half sees them as natural
> > for specifying default behavior in Python.
> 
> Sigh.  None doesn't mean "default", it means "doesn't exist"
> "nothing" "ingenting" "nada" "none" etc.
> 
> "def foo(): return" uses None to indicate that there was no
> return value.
> 
> "map(None, seq)" uses None to indicate that there is really
> no function to map things through.
> 
> "import" stores None in sys.modules to indicate that certain
> package components don't exist.
> 
> "print >>None, value" uses None to indicate that there is
> really no redirection -- in other words, the value is printed
> in the usual location.

PEP that without the import example (it's obfuscated). If you can add
more of them, you'll save yourself time answering questions. I couldn't
have done it, because I still belong to my half <wink>.

hard-to-make-progress-but-constructivism-wins-in-the-end'ly y'rs
-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From guido@beopen.com  Tue Sep 12 22:46:32 2000
From: guido@beopen.com (Guido van Rossum)
Date: Tue, 12 Sep 2000 16:46:32 -0500
Subject: [Python-Dev] Re: [PyWX] RE: PyWX (Python AOLserver plugin)
In-Reply-To: Your message of "Tue, 12 Sep 2000 12:14:12 MST."
 <20000912121412.B6850@cns.caltech.edu>
References: <EDFD2A95EE7DD31187350090279C6767E45A09@THRESHER>
 <20000912121412.B6850@cns.caltech.edu>
Message-ID: <200009122146.QAA01374@cj20424-a.reston1.va.home.com>

> > This is largely true, but we run across trouble with the way
> > the individual threads handle 'argv' variables and current
> > working directory.
> > 
> > CGI scripts typically pass data as variables to the script
> > (as argv).  These (unfortunately) are changed globally across
> > all Python interpreter threads, which can cause problems....
> > 
> > In addition, the current working directory is not unique
> > among independent Python interpreters.  So if a script changes
> > its directory to something, all other running scripts (in
> > unique python interpreter threads) now have their cwd set to
> > this directory.

There's no easy way to fix the current directory problem.  Just tell
your CGI programmers that os.chdir() is off-limits; you may remove it
from the os module (and from the posix module) during initialization
of your interpreter to enforce this.

I don't understand how you would be sharing sys.argv between multiple
interpreters.  Sure, the initial sys.argv is the same (they all
inherit that from the C main()) but after that you can set it to
whatever you want and they should not be shared.

Are you *sure* you are using PyInterpreterState_New() and not just
creating new threads?

> -> So we are basically stuck.  We can't link against Python multiple
> -> times, so our only avenue to provide multiple interpreter instances
> -> is to use the "Py_InterpreterNew" call and hope for the best.
> -> 
> -> Any hope for better interpreter isolation in 2.0? (2.1?)
> 
> Perhaps a better question is: is there any way to get around these problems
> without moving from a threaded model (which we like) to a process model?
> 
> Many of the problems we're running into because of this lack of interpreter
> isolation are due to the UNIX threading model, as I see it.  For example,
> the low-level file operation interference, cwd problems, and environment
> variable problems are all caused by UNIX's determination to share this stuff
> across threads.  I don't see any way of changing this without causing far
> more problems than we fix.

That's the whole point of using threads -- they share as much state as
they can.  I don't see how you can do better without going to
processes.  You could perhaps maintain the illusion of a per-thread
current directory, but you'd have to modify every function that uses
pathnames to take the simulated pwd into account...

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From guido@beopen.com  Tue Sep 12 22:48:47 2000
From: guido@beopen.com (Guido van Rossum)
Date: Tue, 12 Sep 2000 16:48:47 -0500
Subject: [Python-Dev] Unexpected rexec behavior due to _sre
In-Reply-To: Your message of "Tue, 12 Sep 2000 13:47:31 EST."
 <14782.31299.800325.803340@buffalo.fnal.gov>
References: <14782.31299.800325.803340@buffalo.fnal.gov>
Message-ID: <200009122148.QAA01404@cj20424-a.reston1.va.home.com>

> Python 1.5.2 (#3, Feb 11 2000, 15:30:14)  [GCC 2.7.2.3.f.1] on linux2
> Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
> >>> import rexec
> >>> r = rexec.RExec()
> >>> r.r_exec("import re")
> >>> 
> 
> Python 2.0b1 (#2, Sep  8 2000, 12:10:17) 
> [GCC 2.95.2 19991024 (release)] on linux2
> Type "copyright", "credits" or "license" for more information.
> >>> import rexec
> >>> r=rexec.RExec()
> >>> r.r_exec("import re")
> 
> Traceback (most recent call last):
[...]
> ImportError: No module named _sre
> 
> Of course I can work around this by doing:
> 
> >>> r.ok_builtin_modules += '_sre',
> >>> r.r_exec("import re")          
> 
> But I really shouldn't have to do this, right?  _sre is supposed to be
> a low-level implementation detail.  I think I should still be able to 
> "import re" in a restricted environment without having to be aware of
> _sre.

The rexec.py module needs to be fixed.  Should be simple enough.
There may be other modules that it should allow too!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From guido@beopen.com  Tue Sep 12 22:52:45 2000
From: guido@beopen.com (Guido van Rossum)
Date: Tue, 12 Sep 2000 16:52:45 -0500
Subject: [Python-Dev] urllib problems under 2.0
In-Reply-To: Your message of "Tue, 12 Sep 2000 21:12:20 +0200."
 <005e01c01ced$6bb19180$766940d5@hagrid>
References: <005e01c01ced$6bb19180$766940d5@hagrid>
Message-ID: <200009122152.QAA01423@cj20424-a.reston1.va.home.com>

> the proxy code in 2.0b1's new urllib is broken on my box.

Before you fix this, let's figure out what the rules for proxy
settings in the registry are supposed to be, and document these.
How do these get set?

(This should also be documented for Unix if it isn't already; problems
with configuring proxies are ever-recurring questions it seems.  I
haven't used a proxy in years so I'm not good at fixing it... :-)

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From guido@beopen.com  Tue Sep 12 22:55:48 2000
From: guido@beopen.com (Guido van Rossum)
Date: Tue, 12 Sep 2000 16:55:48 -0500
Subject: [Python-Dev] XML runtime errors?
In-Reply-To: Your message of "Tue, 12 Sep 2000 21:40:04 +0200."
 <009601c01cf1$467458e0$766940d5@hagrid>
References: <009601c01cf1$467458e0$766940d5@hagrid>
Message-ID: <200009122155.QAA01452@cj20424-a.reston1.va.home.com>

[/F]
> stoopid question: why the heck is xmllib using
> "RuntimeError" to flag XML syntax errors?

Because it's too cheap to declare its own exception?

> raise RuntimeError, 'Syntax error at line %d: %s' % (self.lineno, message)
> 
> what's wrong with "SyntaxError"?

That would be the wrong exception unless it's parsing Python source
code.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From akuchlin@mems-exchange.org  Tue Sep 12 21:56:10 2000
From: akuchlin@mems-exchange.org (Andrew Kuchling)
Date: Tue, 12 Sep 2000 16:56:10 -0400
Subject: [Python-Dev] Unexpected rexec behavior due to _sre
In-Reply-To: <200009122148.QAA01404@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Tue, Sep 12, 2000 at 04:48:47PM -0500
References: <14782.31299.800325.803340@buffalo.fnal.gov> <200009122148.QAA01404@cj20424-a.reston1.va.home.com>
Message-ID: <20000912165610.A554@kronos.cnri.reston.va.us>

On Tue, Sep 12, 2000 at 04:48:47PM -0500, Guido van Rossum wrote:
>The rexec.py module needs to be fixed.  Should be simple enough.
>There may be other modules that it should allow too!

Are we sure that it's not possible to engineer segfaults or other
nastiness by deliberately feeding _sre bad data?  This was my primary
reason for not exposing the PCRE bytecode interface, since it would
have been difficult to make the code robust against hostile bytecodes.

--amk


From guido@beopen.com  Tue Sep 12 23:27:01 2000
From: guido@beopen.com (Guido van Rossum)
Date: Tue, 12 Sep 2000 17:27:01 -0500
Subject: [Python-Dev] Unexpected rexec behavior due to _sre
In-Reply-To: Your message of "Tue, 12 Sep 2000 16:56:10 -0400."
 <20000912165610.A554@kronos.cnri.reston.va.us>
References: <14782.31299.800325.803340@buffalo.fnal.gov> <200009122148.QAA01404@cj20424-a.reston1.va.home.com>
 <20000912165610.A554@kronos.cnri.reston.va.us>
Message-ID: <200009122227.RAA01676@cj20424-a.reston1.va.home.com>

[AMK]
> Are we sure that it's not possible to engineer segfaults or other
> nastiness by deliberately feeding _sre bad data?  This was my primary
> reason for not exposing the PCRE bytecode interface, since it would
> have been difficult to make the code robust against hostile bytecodes.

Good point!

But how do we support using the re module in restricted mode then?

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From skip@mojam.com  Tue Sep 12 22:26:49 2000
From: skip@mojam.com (Skip Montanaro)
Date: Tue, 12 Sep 2000 16:26:49 -0500 (CDT)
Subject: [Python-Dev] urllib problems under 2.0
In-Reply-To: <200009122152.QAA01423@cj20424-a.reston1.va.home.com>
References: <005e01c01ced$6bb19180$766940d5@hagrid>
 <200009122152.QAA01423@cj20424-a.reston1.va.home.com>
Message-ID: <14782.40857.437768.652808@beluga.mojam.com>

    Guido> (This should also be documented for Unix if it isn't already;
    Guido> problems with configuring proxies are ever-recurring questions it
    Guido> seems.  I haven't used a proxy in years so I'm not good at fixing
    Guido> it... :-)

Under Unix, proxy server specifications are simply URLs (or URIs?) that
specify a protocol ("scheme" in urlparse parlance), a host and (usually) a
port, e.g.:

    http_proxy='http://manatee.mojam.com:3128' ; export http_proxy

I've been having an ongoing discussion with a Windows user who seems to be
stumbling upon the same problem that Fredrik encountered.  If I read the
urllib.getproxies_registry code correctly, it looks like it's expecting a
string that doesn't include a protocol, e.g. simply
"manatee.mojam.com:3128".  This seems a bit inflexible to me, since you
might want to offer multiprotocol proxies through a single URI (though that
may well be what Windows offers its users).  For instance, I believe Squid
will proxy both ftp and http requests via HTTP.  Requiring ftp proxies to do
so via ftp seems inflexible.  My thought (and I can't test this) is that the
code around urllib.py line 1124 should be

                else:        # Use one setting for all protocols
                    proxies['http'] = proxyServer
                    proxies['ftp'] = proxyServer

but that's just a guess based upon the values this other fellow has sent me
and assumes that the Windows registry is supposed to hold proxy information
that contains the protocol.  I cc'd Mark Hammond on my last email to the
user.  Perhaps he'll have something interesting to say when he gets up.

Skip


From fdrake@beopen.com  Tue Sep 12 22:26:17 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Tue, 12 Sep 2000 17:26:17 -0400 (EDT)
Subject: [Python-Dev] Unexpected rexec behavior due to _sre
In-Reply-To: <200009122227.RAA01676@cj20424-a.reston1.va.home.com>
References: <14782.31299.800325.803340@buffalo.fnal.gov>
 <200009122148.QAA01404@cj20424-a.reston1.va.home.com>
 <20000912165610.A554@kronos.cnri.reston.va.us>
 <200009122227.RAA01676@cj20424-a.reston1.va.home.com>
Message-ID: <14782.40825.627148.54355@cj42289-a.reston1.va.home.com>

Guido van Rossum writes:
 > But how do we support using the re module in restricted mode then?

  Perhaps providing a bastion wrapper around the re module, which
would allow the implementation details to be completely hidden?


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From effbot@telia.com  Tue Sep 12 22:50:53 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Tue, 12 Sep 2000 23:50:53 +0200
Subject: [Python-Dev] Unexpected rexec behavior due to _sre
References: <14782.31299.800325.803340@buffalo.fnal.gov> <200009122148.QAA01404@cj20424-a.reston1.va.home.com> <20000912165610.A554@kronos.cnri.reston.va.us>
Message-ID: <01d701c01d03$86dfdfa0$766940d5@hagrid>

andrew wrote:
> Are we sure that it's not possible to engineer segfaults or other
> nastiness by deliberately feeding _sre bad data?

it's pretty easy to trick _sre into reading from the wrong place
(however, it shouldn't be possible to return such data to the
Python level, nor to write into arbitrary locations).

fixing this would probably hurt performance, but I can look into it.

can the Bastion module be used to wrap entire modules?

</F>



From effbot@telia.com  Tue Sep 12 23:01:36 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Wed, 13 Sep 2000 00:01:36 +0200
Subject: [Python-Dev] XML runtime errors?
References: <009601c01cf1$467458e0$766940d5@hagrid>  <200009122155.QAA01452@cj20424-a.reston1.va.home.com>
Message-ID: <01f701c01d05$0aa98e20$766940d5@hagrid>

> [/F]
> > stoopid question: why the heck is xmllib using
> > "RuntimeError" to flag XML syntax errors?
> 
> Because it's too cheap to declare its own exception?

how about adding:

    class XMLError(RuntimeError):
        pass

(and maybe one or more XMLError subclasses?)
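a sketch of how that hierarchy could look; deriving from
RuntimeError keeps old code that catches RuntimeError working
(XMLSyntaxError and report are illustrative names, not a settled API):

```python
class XMLError(RuntimeError):
    """Base class for errors raised by xmllib."""

class XMLSyntaxError(XMLError):
    """Raised when the input is not well-formed XML."""

def report(lineno, message):
    # Replacement for the bare "raise RuntimeError, ..." in xmllib:
    # same message format, more specific exception type.
    raise XMLSyntaxError("Syntax error at line %d: %s" % (lineno, message))
```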

> > what's wrong with "SyntaxError"?
> 
> That would be the wrong exception unless it's parsing Python source
> code.

gotta fix netrc.py then...

</F>



From gstein@lyra.org  Tue Sep 12 22:50:54 2000
From: gstein@lyra.org (Greg Stein)
Date: Tue, 12 Sep 2000 14:50:54 -0700
Subject: [Python-Dev] PyWX (Python AOLserver plugin)
In-Reply-To: <20000912120355.A2457@keymaster.enme.ucalgary.ca>; from nascheme@enme.ucalgary.ca on Tue, Sep 12, 2000 at 12:03:55PM -0600
References: <EDFD2A95EE7DD31187350090279C6767E459CE@THRESHER> <20000912120355.A2457@keymaster.enme.ucalgary.ca>
Message-ID: <20000912145053.B22138@lyra.org>

On Tue, Sep 12, 2000 at 12:03:55PM -0600, Neil Schemenauer wrote:
>...
> On Tue, Sep 12, 2000 at 10:40:36AM -0700, Brent Fulgham wrote:
>...
> > This is largely true, but we run across trouble with the way
> > the individual threads handle 'argv' variables and current
> > working directory.

Are you using Py_NewInterpreter? If so, then it will use the same argv
across all interpreters that it creates. Use PyInterpreterState_New; it
gives you finer-grained control over what goes into an interpreter/thread
state pair.

> > CGI scripts typically pass data as variables to the script
> > (as argv).  These (unfortunately) are changed globally across
> > all Python interpreter threads, which can cause problems....

They're sharing a list, I believe. See above.

This will definitely be true if you have a single interpreter and multiple
thread states.

> > In addition, the current working directory is not unique
> > among independent Python interpreters.  So if a script changes
> > its directory to something, all other running scripts (in
> > unique python interpreter threads) now have their cwd set to
> > this directory.

As pointed out elsewhere, this is a factor of the OS, not Python. And
Python's design really isn't going to attempt to address this (it really
doesn't make much sense to change these semantics).

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/


From fdrake@beopen.com  Tue Sep 12 22:51:09 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Tue, 12 Sep 2000 17:51:09 -0400 (EDT)
Subject: [Python-Dev] New Python 2.0 documentation packages
Message-ID: <14782.42317.633120.757620@cj42289-a.reston1.va.home.com>

  I've just released a new version of the documentation packages for
the Python 2.0 beta 1 release.  These are versioned 2.0b1.1 and dated
today.  These include a variety of small improvements and additions,
but the big deal is:

    The Module Index is back!

  Pick it up at your friendly Python headquarters:

    http://www.pythonlabs.com/tech/python2.0/


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From brent.fulgham@xpsystems.com  Tue Sep 12 22:55:10 2000
From: brent.fulgham@xpsystems.com (Brent Fulgham)
Date: Tue, 12 Sep 2000 14:55:10 -0700
Subject: [Python-Dev] Re: [PyWX] RE: PyWX (Python AOLserver plugin)
Message-ID: <EDFD2A95EE7DD31187350090279C6767E45B1C@THRESHER>

> There's no easy way to fix the current directory problem.  Just tell
> your CGI programmers that os.chdir() is off-limits; you may remove it
> from the os module (and from the posix module) during initialization
> of your interpreter to enforce this.
>

This is probably a good idea.
 
[ ... snip ... ]

> Are you *sure* you are using PyInterpreterState_New() and not just
> creating new threads?
>
Yes.
 
[ ... snip ... ]

> > Many of the problems we're running into because of this 
> > lack of interpreter isolation are due to the UNIX threading 
> > model, as I see it. 

Titus -- any chance s/UNIX/pthreads/ ?  I.e., would using something
like AOLserver's threading libraries help by providing more
thread-local storage in which to squirrel away various environment
data, dictionaries, etc.?

> > For example, the low-level file operation interference, 
> > cwd problems, and environment variable problems are all caused 
> > by UNIX's determination to share this stuff across threads.  
> > I don't see any way of changing this without causing far
> > more problems than we fix.
> 
> That's the whole point of using threads -- they share as much state as
> they can.  I don't see how you can do better without going to
> processes.  You could perhaps maintain the illusion of a per-thread
> current directory, but you'd have to modify every function that uses
> pathnames to take the simulated pwd into account...
> 

I think we just can't be all things to all people, which is a point
Michael has patiently been making this whole time.  I propose:

1.  We disable os.chdir in PyWX initialization.
2.  We assume "standard" CGI behavior of CGIDIR being a single 
directory that all CGIs share.
3.  We address sys.argv (is this just a bug on our part maybe?)
4.  Can we address the os.environ leak similarly?  I'm trying to 
think of cases where a CGI really should be allowed to add to
the environment.  Maybe someone needs to set an environment variable
used by some other program that will be run in a subshell.  If
so, maybe we can somehow serialize activities that modify os.environ
in this way?

Idea:  If Python forks a subshell, it inherits the parent
process's environment.  That's probably the only time we really want
to let someone modify the os.environ -- so it can be passed to
a child.  What if we serialized through the fork somehow like so:

1.  Python script wants to set environment, makes call to os.environ
1a. We serialize here, so now we are single-threaded
2.  Script forks a subshell.
2b. We remove the entry we just added and release mutex.
3.  Execution continues.

This probably still won't work because the script might now expect
these variables to be in the environment dictionary.

Perhaps we can dummy up a fake os.environ dictionary per interpreter
thread that doesn't actually change the true UNIX environment?

What do you guys think...

Thanks,

-Brent


From cgw@fnal.gov  Tue Sep 12 22:57:51 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Tue, 12 Sep 2000 16:57:51 -0500 (CDT)
Subject: [Python-Dev] Unexpected rexec behavior due to _sre
In-Reply-To: <20000912165610.A554@kronos.cnri.reston.va.us>
References: <14782.31299.800325.803340@buffalo.fnal.gov>
 <200009122148.QAA01404@cj20424-a.reston1.va.home.com>
 <20000912165610.A554@kronos.cnri.reston.va.us>
Message-ID: <14782.42719.159114.708604@buffalo.fnal.gov>

Andrew Kuchling writes:
 > On Tue, Sep 12, 2000 at 04:48:47PM -0500, Guido van Rossum wrote:
 > >The rexec.py module needs to be fixed.  Should be simple enough.
 > >There may be other modules that it should allow too!
 > 
 > Are we sure that it's not possible to engineer segfaults or other
 > nastiness by deliberately feeding _sre bad data?  This was my primary
 > reason for not exposing the PCRE bytecode interface, since it would
 > have been difficult to make the code robust against hostile bytecodes.

If it used to be OK to "import re" in restricted mode, and now it
isn't, then this is an incompatible change and needs to be documented.
There are people running webservers and stuff who are counting on
being able to use the re module in restricted mode.



From brent.fulgham@xpsystems.com  Tue Sep 12 22:58:40 2000
From: brent.fulgham@xpsystems.com (Brent Fulgham)
Date: Tue, 12 Sep 2000 14:58:40 -0700
Subject: [Python-Dev] Re: [PyWX] RE: PyWX (Python AOLserver plugin)
Message-ID: <EDFD2A95EE7DD31187350090279C6767E45B23@THRESHER>

> > Are you *sure* you are using PyInterpreterState_New() and not just
> > creating new threads?
> >
> Yes.
>  
Hold on.  This may be our error.

And I'm taking this traffic off python-dev now.  Thanks for 
all the helpful comments!

Regards,

-Brent


From guido@beopen.com  Wed Sep 13 00:07:40 2000
From: guido@beopen.com (Guido van Rossum)
Date: Tue, 12 Sep 2000 18:07:40 -0500
Subject: [Python-Dev] Re: [PyWX] RE: PyWX (Python AOLserver plugin)
In-Reply-To: Your message of "Tue, 12 Sep 2000 14:55:10 MST."
 <EDFD2A95EE7DD31187350090279C6767E45B1C@THRESHER>
References: <EDFD2A95EE7DD31187350090279C6767E45B1C@THRESHER>
Message-ID: <200009122307.SAA02146@cj20424-a.reston1.va.home.com>

> 3.  We address sys.argv (is this just a bug on our part maybe?)

Probably.  The variables are not shared -- their initial values are the
same.

> 4.  Can we address the os.environ leak similarly?  I'm trying to 
> think of cases where a CGI really should be allowed to add to
> the environment.  Maybe someone needs to set an environment variable
> used by some other program that will be run in a subshell.  If
> so, maybe we can somehow serialize activities that modify os.environ
> in this way?

You each get a copy of os.environ.

Running things in subshells from threads is asking for trouble!

But if you have to, you can write your own os.system() substitute that
uses os.execve() -- this allows you to pass in the environment
explicitly.

You may have to take out (override) the code that automatically calls
os.putenv() when an assignment into os.environ is made.
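A minimal sketch of such an os.system() substitute (hypothetical helper name, assuming a POSIX /bin/sh), using fork plus execve so the environment is passed explicitly instead of inherited from the process:

```python
import os

def system_with_env(command, env):
    """Hypothetical os.system() substitute: run `command` through
    /bin/sh, passing `env` explicitly via execve instead of relying
    on the shared process environment."""
    pid = os.fork()
    if pid == 0:
        # Child: replace ourselves with the shell.  If execve fails,
        # exit with the conventional "command not found" status.
        try:
            os.execve("/bin/sh", ["sh", "-c", command], env)
        finally:
            os._exit(127)
    # Parent: wait and return the raw exit status, like os.system().
    _, status = os.waitpid(pid, 0)
    return status
```

Because the environment dictionary is an explicit argument, each interpreter (or thread) can keep its own copy without touching the real UNIX environment.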

> Idea:  If Python forks a subshell, it inherits the parent
> process's environment.  That's probably the only time we really want
> to let someone modify the os.environ -- so it can be passed to
> a child.  What if we serialized through the fork somehow like so:
> 
> 1.  Python script wants to set environment, makes call to os.environ
> 1a. We serialize here, so now we are single-threaded
> 2.  Script forks a subshell.
> 2b. We remove the entry we just added and release mutex.
> 3.  Execution continues.
> 
> This probably still won't work because the script might now expect
> these variables to be in the environment dictionary.
> 
> Perhaps we can dummy up a fake os.environ dictionary per interpreter
> thread that doesn't actually change the true UNIX environment?

See above.  You can do it!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From jcollins@pacificnet.net  Wed Sep 13 01:05:03 2000
From: jcollins@pacificnet.net (jcollins@pacificnet.net)
Date: Tue, 12 Sep 2000 17:05:03 -0700 (PDT)
Subject: [Python-Dev] New Python 2.0 documentation packages
In-Reply-To: <14782.42317.633120.757620@cj42289-a.reston1.va.home.com>
Message-ID: <Pine.LNX.4.21.0009121659550.995-100000@euclid.endtech.com>

Could you also include the .info files?  I have tried unsuccessfully to
build the .info files in the distribution.  Here is the output from make:

<stuff deleted>
make[2]: Leaving directory `/home/collins/Python-2.0b1/Doc/html'
make[1]: Leaving directory `/home/collins/Python-2.0b1/Doc'
../tools/mkinfo ../html/api/api.html
perl -I/home/collins/Python-2.0b1/Doc/tools
/home/collins/Python-2.0b1/Doc/tools/html2texi.pl
/home/collins/Python-2.0b1/Doc/html/api/api.html
<CODE>
  "__all__"
Expected string content of <A> in <DT>: HTML::Element=HASH(0x8241fbc) at
/home/collins/Python-2.0b1/Doc/tools/html2texi.pl line 550.
make: *** [python-api.info] Error 255


Thanks,

Jeff



On Tue, 12 Sep 2000, Fred L. Drake, Jr. wrote:

> 
>   I've just released a new version of the documentation packages for
> the Python 2.0 beta 1 release.  These are versioned 2.0b1.1 and dated
> today.  These include a variety of small improvements and additions,
> but the big deal is:
> 
>     The Module Index is back!
> 
>   Pick it up at your friendly Python headquarters:
> 
>     http://www.pythonlabs.com/tech/python2.0/
> 
> 
>   -Fred
> 
> 



From greg@cosc.canterbury.ac.nz  Wed Sep 13 02:20:06 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed, 13 Sep 2000 13:20:06 +1200 (NZST)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <006e01c01cf0$921a4da0$766940d5@hagrid>
Message-ID: <200009130120.NAA20286@s454.cosc.canterbury.ac.nz>

Fredrik Lundh <effbot@telia.com>:

> "map(None, seq)" uses None to indicate that there are really
> no function to map things through.

This one is just as controversial as print>>None. I would
argue that it *doesn't* mean "no function", because that
doesn't make sense -- there always has to be *some* function.
It really means "use a default function which constructs
a tuple from its arguments".
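That default function can be spelled out explicitly, which makes Greg's reading concrete (hypothetical helper name; this sketch reproduces what Python 2's map(None, ...) did implicitly, including the None-padding of shorter sequences):

```python
from itertools import zip_longest

def map_default(*seqs):
    """Sketch of the 'default function' map(None, ...) implied:
    one sequence passes through unchanged; several sequences are
    combined into tuples, padding the shorter ones with None."""
    if len(seqs) == 1:
        return list(seqs[0])
    return list(zip_longest(*seqs))  # pads short sequences with None
```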

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From mhagger@alum.mit.edu  Wed Sep 13 06:08:57 2000
From: mhagger@alum.mit.edu (Michael Haggerty)
Date: Wed, 13 Sep 2000 01:08:57 -0400 (EDT)
Subject: [Python-Dev] Re: [PyWX] RE: PyWX (Python AOLserver plugin)
In-Reply-To: <EDFD2A95EE7DD31187350090279C6767E45B1C@THRESHER>
References: <EDFD2A95EE7DD31187350090279C6767E45B1C@THRESHER>
Message-ID: <14783.3049.364561.641240@freak.kaiserty.com>

Brent Fulgham writes:
> Titus -- any chance s/UNIX/pthreads/ ?  I.e., would using something
> like AOLserver's threading libraries help by providing more
> thread-local storage in which to squirrel away various environment
> data, dictionaries, etc.?

The problem isn't a lack of thread-local storage.  The problem is that
*everything* in unix assumes a single environment and a single PWD.
Of course we could emulate a complete unix-like virtual machine within
every thread :-)

> Idea:  If Python forks a subshell, it inherits the parent
> process's environment.  That's probably the only time we really want
> to let someone modify the os.environ -- so it can be passed to
> a child.

Let's set os.environ to a normal dict (i.e., break the connection to
the process's actual environment) initialized to the contents of the
environment.  This fake environment can be passed to a child using
execve.  We would have to override os.system() and its cousins to use
execve with this fake environment.

We only need to figure out:

1. Whether we can just assign a dict to os.environ (and
   posix.environ?) to kill their special behaviors;

2. Whether such changes can be made separately in each interpreter
   without them affecting one another;

3. Whether special measures have to be taken to cause the fake
   environment dictionary to be garbage collected when the interpreter
   is destroyed.
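Point 1 can be sketched in a few lines (a hypothetical approach, not anything the standard library promises; whether this cleanly kills the putenv hook in every interpreter is exactly the open question):

```python
import os

# Replace the magic os.environ mapping with a plain dict snapshot,
# severing the connection to the real process environment.
os.environ = dict(os.environ)

# Assignments now touch only the snapshot; os.putenv is never called,
# so nothing leaks into the environment shared by other threads.
os.environ["FAKE_VAR"] = "per-interpreter value"
```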

Regarding PWD there's nothing we can realistically do except document
this limitation and clobber os.chdir() as suggested by Guido.

Michael

--
Michael Haggerty
mhagger@alum.mit.edu


From just@letterror.com  Wed Sep 13 09:33:15 2000
From: just@letterror.com (Just van Rossum)
Date: Wed, 13 Sep 2000 09:33:15 +0100
Subject: [Python-Dev] Challenge about print >> None
Message-ID: <l03102802b5e4e70319fa@[193.78.237.174]>

Vladimir Marangozov wrote:
>And let me add to that the following summary: the whole extended
>print idea is about convenience. Convenience for those that know
>what file redirection is. Not for newbies. You can't argue too much
>about extended print as an intuitive concept for newbies.

That's exactly what disturbs me, too. The main reason for the extended
print statement is to make it easier for newbies to solve this problem "ok,
now how do I print to a file other than sys.stdout?". The main flaw in this
reasoning is that a newbie doesn't necessarily realize that when you print
something to the screen it actually goes through a _file_ object, so is
unlikely to ask that question. Or the other way round: someone asking that
question can hardly be considered a newbie. It takes quite a bit of
learning before someone can make the step from "a file is a thing on my
hard drive that stores data" to "a file is an abstract stream object". And
once you've made that step you don't really need extended print statement
that badly anymore.

>The present
>change disturbs experienced users (the >> syntax aside) and you get
>signals about that from them, because the current behavior does not
>comply with any existing concept as far as file redirection is concerned.
>However, since these guys are experienced and knowledgable, they already
>understand this game quite well. So what you get is just "Oh really? OK,
>this is messy" from the chatty ones and everybody moves on.  The others
>just don't care, but they not necessarily agree.

Amen.

Just




From guido@beopen.com  Wed Sep 13 13:57:03 2000
From: guido@beopen.com (Guido van Rossum)
Date: Wed, 13 Sep 2000 07:57:03 -0500
Subject: [Python-Dev] Re: [PyWX] RE: PyWX (Python AOLserver plugin)
In-Reply-To: Your message of "Wed, 13 Sep 2000 01:08:57 -0400."
 <14783.3049.364561.641240@freak.kaiserty.com>
References: <EDFD2A95EE7DD31187350090279C6767E45B1C@THRESHER>
 <14783.3049.364561.641240@freak.kaiserty.com>
Message-ID: <200009131257.HAA04051@cj20424-a.reston1.va.home.com>

> Let's set os.environ to a normal dict (i.e., break the connection to
> the process's actual environment) initialized to the contents of the
> environment.  This fake environment can be passed to a child using
> execve.  We would have to override os.system() and its cousins to use
> execve with this fake environment.
> 
> We only need to figure out:
> 
> 1. Whether we can just assign a dict to os.environ (and
>    posix.environ?) to kill their special behaviors;

You only need to assign to os.environ; posix.environ is not magic.

> 2. Whether such changes can be made separately in each interpreter
>    without them affecting one another;

Yes -- each interpreter (if you use NewInterpreter or whatever) has
its own copy of the os module.

> 3. Whether special measures have to be taken to cause the fake
>    environment dictionary to be garbage collected when the interpreter
>    is destroyed.

No.

> Regarding PWD there's nothing we can realistically do except document
> this limitation and clobber os.chdir() as suggested by Guido.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From gvwilson@nevex.com  Wed Sep 13 13:58:58 2000
From: gvwilson@nevex.com (Greg Wilson)
Date: Wed, 13 Sep 2000 08:58:58 -0400 (EDT)
Subject: [Python-Dev] Academic Paper on Open Source
Message-ID: <Pine.LNX.4.10.10009130854520.2281-100000@akbar.nevex.com>

Yutaka Yamauchi has written an academic paper about Open Source
development methodology based in part on studying the GCC project:

http://www.lab7.kuis.kyoto-u.ac.jp/~yamauchi/papers/yamauchi_cscw2000.pdf

Readers of this list may find it interesting...

Greg
http://www.software-carpentry.com



From jack@oratrix.nl  Wed Sep 13 14:11:07 2000
From: jack@oratrix.nl (Jack Jansen)
Date: Wed, 13 Sep 2000 15:11:07 +0200
Subject: [Python-Dev] Need some hands to debug MacPython installer
In-Reply-To: Message by Charles G Waldman <cgw@fnal.gov> ,
 Fri, 8 Sep 2000 18:41:12 -0500 (CDT) , <14777.31000.382351.905418@buffalo.fnal.gov>
Message-ID: <20000913131108.2F151303181@snelboot.oratrix.nl>

Charles,
sorry, I didn't see your message until now. Could you give me some information 
on the configuration of the mac involved? Ideally the output of "Apple System 
Profiler", which will be in the Apple-menu if you have it. It appears, though, 
that you're running an old MacOS, in which case you may not have it. Then what 
I'd like to know is the machine type, OS version, amount of memory.

> I am not a Mac user but I saw your posting and my wife has a Mac so I
> decided to give it a try. 
> 
> When I ran the installer, a lot of the text referred to "Python 1.6"
> despite this being a 2.0 installer.
> 
> As the install completed I got a message:  
> 
>  The application "Configure Python" could not be opened because
>  "OTInetClientLib -- OTInetGetSecondaryAddresses" could not be found
> 
> After that, if I try to bring up PythonIDE or PythonInterpreter by
> clicking on the 16-ton icons, I get the same message about
> OTInetGetSecondaryAddresses.  So I'm not able to run Python at all
> right now on this Mac.
> 

--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen@oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 




From Vladimir.Marangozov@inrialpes.fr  Wed Sep 13 14:58:53 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Wed, 13 Sep 2000 15:58:53 +0200 (CEST)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <l03102802b5e4e70319fa@[193.78.237.174]> from "Just van Rossum" at Sep 13, 2000 09:33:15 AM
Message-ID: <200009131358.PAA01096@python.inrialpes.fr>

Just van Rossum wrote:
> 
> Amen.
> 

The good thing is that we discussed this in relatively good time. Like other
minor existing Python features, this one is probably going to die in
a dark corner due to the following conclusions:

1. print >> None generates multiple interpretations. It doesn't really
   matter which one is right or wrong. There is confusion. Face it.

2. For many users, "print >>None makes the '>>None' part disappear"
   is perceived as too magic and inconsistent in the face of general
   public knowledge on redirecting output. Honor that opinion.

3. Any specialization of None is bad. None == sys.stdout is no better
   than None == NullFile. A bug in user code may cause passing None
   which will dump the output to stdout, while it's meant to go into
   a file (say, a web log). This would be hard to catch and once this
   bites you, you'll start adding extra checks to make sure you're not
   passing None. (IOW, the same -1 on NullFile applies to sys.stdout)

A safe recommendation is to back this out and make it raise an exception.
No functionality of _extended_ print is lost.

whatever-the-outcome-is,-update-the-PEP'ly y'rs
-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From DavidA@ActiveState.com  Wed Sep 13 17:24:12 2000
From: DavidA@ActiveState.com (David Ascher)
Date: Wed, 13 Sep 2000 09:24:12 -0700 (Pacific Daylight Time)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <200009130120.NAA20286@s454.cosc.canterbury.ac.nz>
Message-ID: <Pine.WNT.4.21.0009130921340.1496-100000@loom>

On Wed, 13 Sep 2000, Greg Ewing wrote:

> Fredrik Lundh <effbot@telia.com>:
> 
> > "map(None, seq)" uses None to indicate that there are really
> > no function to map things through.
> 
> This one is just as controversial as print>>None. I would
> argue that it *doesn't* mean "no function", because that
> doesn't make sense -- there always has to be *some* function.
> It really means "use a default function which constructs
> a tuple from its arguments".

Agreed. To take another example which I also find 'warty', 

	string.split(foo, None, 3)

doesn't mean "use no separators" it means "use whitespace separators which
can't be defined in a single string".
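The behavior David describes, shown in the string method form for brevity:

```python
# None as the "separator" doesn't mean "no separator"; it selects the
# default behavior of splitting on runs of whitespace:
parts = "a  b\tc d e".split(None, 3)
# parts is ["a", "b", "c", "d e"]
```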

Thus, FWIW, I'm -1 on the >>None construct.  I'll have a hard time
teaching it, and I'll recommend against using it (unless and until
convinced otherwise, of course).

--david



From titus@caltech.edu  Wed Sep 13 18:09:42 2000
From: titus@caltech.edu (Titus Brown)
Date: Wed, 13 Sep 2000 10:09:42 -0700
Subject: [Python-Dev] Re: [PyWX] RE: PyWX (Python AOLserver plugin)
In-Reply-To: <EDFD2A95EE7DD31187350090279C6767E45B1C@THRESHER>; from brent.fulgham@xpsystems.com on Tue, Sep 12, 2000 at 02:55:10PM -0700
References: <EDFD2A95EE7DD31187350090279C6767E45B1C@THRESHER>
Message-ID: <20000913100942.G10010@cns.caltech.edu>

-> > There's no easy way to fix the current directory problem.  Just tell
-> > your CGI programmers that os.chdir() is off-limits; you may remove it
-> > from the os module (and from the posix module) during initialization
-> > of your interpreter to enforce this.
-> >
-> 
-> This is probably a good idea.

Finally, he says it ;).

-> > Are you *sure* you are using PyInterpreterState_New() and not just
-> > creating new threads?
-> >
-> Yes.

We're using Py_NewInterpreter().  I don't know how much Brent has said
(I'm not on the python-dev mailing list, something I intend to remedy)
but we have two basic types of environment: new interpreter and reused
interpreter.

Everything starts off as a new interpreter, created using Py_NewInterpreter().
At the end of a Web request, a decision is made about "cleaning up" the
interpreter for re-use, vs. destroying it.

Interpreters are cleaned for reuse roughly as follows (using really ugly
C pseudo-code with error checking removed):

---

PyThreadState_Clear(thread_state);
PyDict_Clear(main_module_dict);

// Re-add the builtin module to __main__'s namespace

bimod = PyImport_ImportModule("__builtin__");
PyDict_SetItemString(main_module_dict, "__builtins__", bimod);

---

Some time ago, I decided not to use PyInterpreterState_New() because it
seemed unnecessary; Py_NewInterpreter() did everything we wanted and nothing
more.  Looking at the code for 1.5.2, Py_NewInterpreter():

1) creates a new interpreter state;
2) creates the first thread state for that interpreter;
3) initializes sys.modules and imports the __builtin__ and sys modules;
4) sets the path;
5) initializes main, as we do above in the reuse part;
6) (optionally) does site initialization.

Since I think we want to do all of that, I don't see any problems.  It seems
like the sys.argv stuff is a problem with PyWX, not with Python inherently.

cheers,
--titus


From skip@mojam.com  Wed Sep 13 18:48:10 2000
From: skip@mojam.com (Skip Montanaro)
Date: Wed, 13 Sep 2000 12:48:10 -0500 (CDT)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <Pine.WNT.4.21.0009130921340.1496-100000@loom>
References: <200009130120.NAA20286@s454.cosc.canterbury.ac.nz>
 <Pine.WNT.4.21.0009130921340.1496-100000@loom>
Message-ID: <14783.48602.639962.38233@beluga.mojam.com>

    David> Thus, FWIW, I'm -1 on the >>None construct.  I'll have a hard
    David> time teaching it, and I'll recommend against using it (unless and
    David> until convinced otherwise, of course).

I've only been following this thread with a few spare neurons.  Even so, I
really don't understand what all the fuss is about.  From the discussions
I've read on this subject, I'm confident the string "print >>None" will
never appear in an actual program.  Instead, it will be used the way Guido
envisioned:

    def write(arg, file=None):
	print >>file, arg

It will never be used in interactive sessions.  You'd just type "print arg"
or "print >>file, arg".  Programmers will never use the name "None" when
putting prints in their code.  They will write "print >>file" where file can
happen to take on the value None.  I doubt new users will even notice it, so
don't bother mentioning it when teaching about the print statement.
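Skip's wrapper pattern, spelled with the explicit check instead of the extended print syntax (a sketch using file.write rather than print, so it reads the same in any Python version):

```python
import sys

def write(arg, file=None):
    # Explicit spelling of the default Skip describes: None falls
    # back to sys.stdout, looked up at call time (a sketch, not a
    # stdlib function).
    out = sys.stdout if file is None else file
    out.write(str(arg) + "\n")
```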

I'm sure David teaches people how to use classes without ever mentioning
that they can fiddle a class's __bases__ attribute.  That feature seems much
more subtle and a whole lot more dangerous than "print >> None", yet I hear
no complaints about it.

The __bases__ example occurred to me because I had occasion to use it for
the first time a few days ago.  I don't even know how long the language has
supported it (obviously at least since 1.5.2).  Worked like a charm.
Without it, I would have been stuck making a bunch of subclasses of
cgi.FormContentDict, all because I wanted each of the subclasses I used to
have a __delitem__ method.  What was an "Aha!" followed by about thirty
seconds of typing would have been a whole mess of fiddling without
modifiable __bases__ attributes.  Would I expect the readers of this list to
understand what I did?  In a flash.  Would I mention it to brand new Python
programmers?  Highly unlikely.

It's great to make sure Python is approachable for new users.  I believe we
need to also continue to improve Python's power for more advanced users.  That
doesn't mean turning it into Perl, but it does occasionally mean adding
features to the language that new users won't need in their first class
assignment.

+1 from me.  If Guido likes it, that's cool.

Skip



From gward@python.net  Thu Sep 14 03:53:51 2000
From: gward@python.net (Greg Ward)
Date: Wed, 13 Sep 2000 22:53:51 -0400
Subject: [Python-Dev] Re: packaging Tkinter separately from core Python
In-Reply-To: <200009131247.HAA03938@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Wed, Sep 13, 2000 at 07:47:46AM -0500
References: <14782.59951.901752.674039@bitdiddle.concentric.net> <200009131247.HAA03938@cj20424-a.reston1.va.home.com>
Message-ID: <20000913225351.A862@beelzebub>

On 13 September 2000, Guido van Rossum said:
> Hm.  Would it be easier to have Tkinter.py and friends be part of the
> core distribution, and place only _tkinter and Tcl/Tk in the Tkinter
> RPM?

That seems unnecessarily complex.

> If that's not good, I would recommend installing as a subdir of
> site-packages, with a .pth file pointing to that subdir, e.g.:

And that seems nice.  ;-)

Much easier to get the Distutils to install a .pth file than to do evil
trickery to make it install into, eg., the standard library: just use
the 'extra_path' option.  Eg. in the NumPy setup script
(distutils/examples/numpy_setup.py):

    extra_path = 'Numeric'

means put everything into a directory "Numeric" and create
"Numeric.pth".  If you want different names, you have to make
'extra_path' a tuple:

    extra_path = ('tkinter', 'tkinter-lib')

should get your example setup:

>   site-packages/
>               tkinter.pth		".../site-packages/tkinter-lib"
> 		tkinter-lib/
> 			    _tkinter.so
> 			    Tkinter.py
> 			    Tkconstants.py
> 			    ...etc...

But it's been a while since this stuff was tested.
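For anyone unfamiliar with the mechanism: a .pth file is just a text file of directory names that site.py appends to sys.path at startup. A rough sketch of that processing (simplified; the real site.py also handles lines starting with "import"):

```python
import os
import sys

def process_pth(sitedir, name):
    """Simplified sketch of site.py's .pth handling: each non-blank,
    non-comment line names a directory (relative to sitedir) to
    append to sys.path."""
    with open(os.path.join(sitedir, name)) as f:
        for line in f:
            line = line.rstrip()
            if not line or line.startswith("#"):
                continue
            path = os.path.join(sitedir, line)
            if os.path.isdir(path) and path not in sys.path:
                sys.path.append(path)
```

So "tkinter.pth" containing the single line "tkinter-lib" is enough to make the modules in site-packages/tkinter-lib importable without being a package.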

BTW, is there any good reason to call that directory "tkinter-lib"
instead of "tkinter"?  Is that the preferred convention for directories-
full-of-modules that are not packages?

        Greg
-- 
Greg Ward                                      gward@python.net
http://starship.python.net/~gward/


From martin@loewis.home.cs.tu-berlin.de  Thu Sep 14 07:53:56 2000
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Thu, 14 Sep 2000 08:53:56 +0200
Subject: [Python-Dev] Integer Overflow
Message-ID: <200009140653.IAA01702@loewis.home.cs.tu-berlin.de>

With the current CVS, I get surprising results

Python 2.0b1 (#47, Sep 14 2000, 08:51:18) 
[GCC 2.95.2 19991024 (release)] on linux2
Type "copyright", "credits" or "license" for more information.
>>> 1*1
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
OverflowError: integer multiplication

What is causing this exception?

Curious,
Martin


From tim_one@email.msn.com  Thu Sep 14 08:04:27 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 14 Sep 2000 03:04:27 -0400
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <200009121411.QAA30848@python.inrialpes.fr>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com>

[Tim]
> sometimes-you-just-gotta-trust-your-bdfl-ly y'rs  - tim

[Vladimir Marangozov]
> ...
> I would have preferred arguments. The PEP and your responses lack them
> which is another sign about this feature.

I'll suggest as an alternative that we have an enormous amount of work to
complete for the 2.0 release, and continuing to argue about this isn't
perceived as a reasonable use of limited time.

I've tried it; I like it; anything I say beyond that would just be jerkoff
rationalizing of the conclusion I'm *condemned* to support by my own
pleasant experience with it.  Same with Guido.

We went over it again at a PythonLabs mtg today, and compared to the other
20 things on our agenda, when it popped up we all agreed "eh" after about a
minute.  It has supporters and detractors, the arguments are getting all the
more elaborate, extreme, and repetitive with each iteration, and positions
are clearly frozen already.  That's what a BDFL is for.  He's seen all the
arguments; they haven't changed his mind; and, sorry, but it's a tempest in
a teapot regardless.

how-about-everyone-pitch-in-to-help-clear-the-bug-backlog-instead?-ly
    y'rs  - tim




From tim_one@email.msn.com  Thu Sep 14 08:14:14 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 14 Sep 2000 03:14:14 -0400
Subject: [Python-Dev] Integer Overflow
In-Reply-To: <200009140653.IAA01702@loewis.home.cs.tu-berlin.de>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEPFHFAA.tim_one@email.msn.com>

Works for me (Windows).  Local corruption?  Compiler optimization error?
Config screwup?  Clobber everything and rebuild.  If still a problem, turn
off optimization and try again.  If still a problem, write up what you know
and enter SourceForge bug, marking it platform-specific.

> -----Original Message-----
> From: python-dev-admin@python.org [mailto:python-dev-admin@python.org]On
> Behalf Of Martin v. Loewis
> Sent: Thursday, September 14, 2000 2:54 AM
> To: python-dev@python.org
> Subject: [Python-Dev] Integer Overflow
>
>
> With the current CVS, I get surprising results
>
> Python 2.0b1 (#47, Sep 14 2000, 08:51:18)
> [GCC 2.95.2 19991024 (release)] on linux2
> Type "copyright", "credits" or "license" for more information.
> >>> 1*1
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> OverflowError: integer multiplication
>
> What is causing this exception?
>
> Curious,
> Martin
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev@python.org
> http://www.python.org/mailman/listinfo/python-dev




From martin@loewis.home.cs.tu-berlin.de  Thu Sep 14 08:32:26 2000
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Thu, 14 Sep 2000 09:32:26 +0200
Subject: [Python-Dev] Integer Overflow
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEPFHFAA.tim_one@email.msn.com>
References: <LNBBLJKPBEHFEDALKOLCGEPFHFAA.tim_one@email.msn.com>
Message-ID: <200009140732.JAA02739@loewis.home.cs.tu-berlin.de>

> Works for me (Windows).  Local corruption?  Compiler optimization error?
> Config screwup?

Config screwup. I simultaneously try glibc betas, and 2.1.93 manages
to define LONG_BIT as 64 (due to testing whether INT_MAX is 2147483647
at a time when INT_MAX is not yet defined). Shifting by LONG_BIT/2 is
then a no-op, so ah=a, bh=b in int_mul. gcc did warn about this, but I
ignored/forgot about the warning.

I reported that to the glibc people, and worked around it locally.

Sorry for the confusion,

Martin


From tim_one@email.msn.com  Thu Sep 14 08:44:37 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 14 Sep 2000 03:44:37 -0400
Subject: [Python-Dev] Integer Overflow
In-Reply-To: <200009140732.JAA02739@loewis.home.cs.tu-berlin.de>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEPHHFAA.tim_one@email.msn.com>

Glad you found it!  Note that the result of shifting a 32-bit integer *by*
32 isn't defined in C (gotta love it ...), so "no-op" was lucky.

> -----Original Message-----
> From: Martin v. Loewis [mailto:martin@loewis.home.cs.tu-berlin.de]
> Sent: Thursday, September 14, 2000 3:32 AM
> To: tim_one@email.msn.com
> Cc: python-dev@python.org
> Subject: Re: [Python-Dev] Integer Overflow
>
>
> > Works for me (Windows).  Local corruption?  Compiler optimization error?
> > Config screwup?
>
> Config screwup. I simultaneously try glibc betas, and 2.1.93 manages
> to define LONG_BIT as 64 (due to testing whether INT_MAX is 2147483647
> at a time when INT_MAX is not yet defined). Shifting by LONG_BIT/2 is
> then a no-op, so ah=a, bh=b in int_mul. gcc did warn about this, but I
> ignored/forgot about the warning.
>
> I reported that to the glibc people, and worked around it locally.
>
> Sorry for the confusion,
>
> Martin




From Vladimir.Marangozov@inrialpes.fr  Thu Sep 14 10:40:37 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Thu, 14 Sep 2000 11:40:37 +0200 (CEST)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com> from "Tim Peters" at Sep 14, 2000 03:04:27 AM
Message-ID: <200009140940.LAA02556@python.inrialpes.fr>

Tim Peters wrote:
> 
> I'll suggest as an alternative that we have an enormous amount of work to
> complete for the 2.0 release, and continuing to argue about this isn't
> perceived as a reasonable use of limited time.

Fair enough, but I had no choice: this feature was imposed without prior
discussion and I saw it too late to take a stance. I've done my job.

> 
> I've tried it; I like it; anything I say beyond that would just be jerkoff
> rationalizing of the conclusion I'm *condemned* to support by my own
> pleasant experience with it.  Same with Guido.

Nobody is condemned when receptive. You're inflexibly persistent here.

Remove the feature, discuss it, try providing arguments so that we can
agree (or disagree), write the PEP including a summary of the discussion,
then decide and add the feature.

In this particular case, I find Guido's attitude regarding the "rules of
the game" (that you have fixed, btw, PEPs included) quite unpleasant.

I speak for myself. Guido has invited me here so that I could share
my opinions and experience easily and that's what I'm doing in my spare
cycles (no, your agenda is not mine so I won't look at the bug list).
If you think I'm doing more harm than good, no problem. I'd be happy
to decline his invitation and quit.

I'll be even more explicit:

There are organizational bugs in the functioning of this micro-society
that would need to be fixed first, IMHO. Other signs about this have
been expressed in the past too. Nobody commented. Silence can't rule
forever. Note that I'm not writing arguments for my own pleasure or to
scratch my nose. My time is precious enough, just like yours.

> 
> We went over it again at a PythonLabs mtg today, and compared to the other
> 20 things on our agenda, when it popped up we all agreed "eh" after about a
> minute.  It has supporters and detractors, the arguments are getting ever
> more elaborate, extreme and repetitive with each iteration, and positions
> are clearly frozen already.  That's what a BDFL is for.  He's seen all the
> arguments; they haven't changed his mind; and, sorry, but it's a tempest in
> a teapot regardless.

Nevermind.

Open your eyes, though.

pre-release-pressure-can-do-more-harm-than-it-should'ly ly
-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From gward@mems-exchange.org  Thu Sep 14 14:03:28 2000
From: gward@mems-exchange.org (Greg Ward)
Date: Thu, 14 Sep 2000 09:03:28 -0400
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIEHLHFAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Mon, Sep 11, 2000 at 09:27:10PM -0400
References: <200009112322.BAA29633@python.inrialpes.fr> <LNBBLJKPBEHFEDALKOLCIEHLHFAA.tim_one@email.msn.com>
Message-ID: <20000914090328.A31011@ludwig.cnri.reston.va.us>

On 11 September 2000, Tim Peters said:
> > So as long as one uses extended print, she's already an advanced user.
> 
> Nope!  "Now how did I get this to print to a file instead?" is one of the
> faqiest of newbie FAQs on c.l.py, and the answers they've been given in the
> past were sheer torture for them ("sys?  what's that?  rebind sys.stdout to
> a file-like object?  what?! etc").

But that's only an argument for "print >>file"; it doesn't support
"print >>None" == "print >>sys.stdout" == "print" at all.

The only possible rationale I can see for that equivalence is in a
function that wraps print; it lets you get away with this:

    def my_print (string, file=None):
        print >> file, string

instead of this:

    def my_print (string, file=None):
        if file is None: file = sys.stdout
        print >> file, string

...which is *not* sufficient justification for the tortured syntax *and*
bizarre semantics.  I can live with the tortured ">>" syntax, but
coupled with the bizarre "None == sys.stdout" semantics, this is too
much.

Hmmm.  Reviewing my post, I think someone needs to decide what the
coding standard for ">>" is: "print >>file" or "print >> file"?  ;-)

        Greg


From gward@mems-exchange.org  Thu Sep 14 14:13:27 2000
From: gward@mems-exchange.org (Greg Ward)
Date: Thu, 14 Sep 2000 09:13:27 -0400
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <20000914090328.A31011@ludwig.cnri.reston.va.us>; from gward@ludwig.cnri.reston.va.us on Thu, Sep 14, 2000 at 09:03:28AM -0400
References: <200009112322.BAA29633@python.inrialpes.fr> <LNBBLJKPBEHFEDALKOLCIEHLHFAA.tim_one@email.msn.com> <20000914090328.A31011@ludwig.cnri.reston.va.us>
Message-ID: <20000914091326.B31011@ludwig.cnri.reston.va.us>

Oops.  Forgot to cast my votes:

+1 on redirectable print
-0 on the particular syntax chosen (not that it matters now)
-1 on None == sys.stdout (yes, I know it's more subtle than that,
      but that's just what it looks like)

IMHO "print >>None" should have the same effect as "print >>37" or
"print >>'foo'":

  ValueError: attempt to print to a non-file object

(as opposed to "print to file descriptor 37" and "open a file called
'foo' in append mode and write to it", of course.  ;-)

        Greg


From peter@schneider-kamp.de  Thu Sep 14 14:07:19 2000
From: peter@schneider-kamp.de (Peter Schneider-Kamp)
Date: Thu, 14 Sep 2000 15:07:19 +0200
Subject: [Python-Dev] Re: timeouts  (Was: checking an ip)
References: <SOLv5.8548$l6.467825@zwoll1.home.nl> <39BF9585.FC4C9CB1@schneider-kamp.de> <8po6ei$893$1@sunnews.cern.ch> <013601c01e1f$2f8dde60$978647c1@DEVELOPMENT>
Message-ID: <39C0CD87.396302EC@schneider-kamp.de>

I have proposed the inclusion of Timothy O'Malley's timeoutsocket.py
into the standard socket module on python-dev, but there has not been
a single reply in four weeks.

http://www.python.org/pipermail/python-dev/2000-August/015111.html

I think there are four possibilities:
1) add a timeoutsocket class to Lib/timeoutsocket.py
2) add a timeoutsocket class to Lib/socket.py
3) replace the socket class in Lib/socket.py
4) wait until the interval is down to one day
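Roughly, the kind of class that option 1) or 2) would add -- a minimal
illustration built on select(), NOT O'Malley's actual timeoutsocket.py --
looks like this:

```python
# Minimal sketch of a timeout wrapper, just to illustrate the idea.
# This is not O'Malley's code; a real version would also cover
# connect(), accept(), send(), makefile(), etc.
import select
import socket

class TimeoutSocket:
    """Delegate to a real socket, but make recv() give up after a timeout."""

    def __init__(self, sock, timeout=10.0):
        self._sock = sock
        self._timeout = timeout

    def recv(self, bufsize):
        # Wait until the socket is readable, or the timeout expires.
        ready, _, _ = select.select([self._sock], [], [], self._timeout)
        if not ready:
            raise socket.error("recv timed out after %g seconds"
                               % self._timeout)
        return self._sock.recv(bufsize)

    def __getattr__(self, name):
        # Everything else (send, close, ...) goes to the wrapped socket.
        return getattr(self._sock, name)
```

A real version would also have to decide which exception to raise and how
to interact with sockets that are already in non-blocking mode.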

feedback-hungri-ly y'rs
Peter

Ulf Engstrøm schrieb:
> 
> I'm thinking this is something that should be put in the distro, since it
> seems a lot of people are asking for it all the time. I'm using select, but
> it'd be even better to have a proper timeout on all the socket stuff. Not to
> mention timeout on input and raw_input. (Using select on those is platform
> dependent.) Does anyone have a solution to that?
> Are there any plans to put in timeouts? Can there be? :)
> Regards
> Ulf
> 
> > sigh...
> > and to be more precise, look at yesterday's post labelled
> > nntplib timeout bug?
> > interval between posts asking about timeout for sockets is already
> > down to 2 days.. great :-)
> 
> --
> http://www.python.org/mailman/listinfo/python-list


From garabik@atlas13.dnp.fmph.uniba.sk  Thu Sep 14 15:58:35 2000
From: garabik@atlas13.dnp.fmph.uniba.sk (Radovan Garabik)
Date: Thu, 14 Sep 2000 18:58:35 +0400
Subject: [Python-Dev] Re: [Fwd: Re: timeouts  (Was: checking an ip)]
In-Reply-To: <39C0D268.61F35DE8@schneider-kamp.de>; from peter@schneider-kamp.de on Thu, Sep 14, 2000 at 03:28:08PM +0200
References: <39C0D268.61F35DE8@schneider-kamp.de>
Message-ID: <20000914185835.A4080@melkor.dnp.fmph.uniba.sk>

On Thu, Sep 14, 2000 at 03:28:08PM +0200, Peter Schneider-Kamp wrote:
> 
> I have proposed the inclusion of Timothy O'Malley's timeoutsocket.py
> into the standard socket module on python-dev, but there has not been
> a single reply in four weeks.
> 
> http://www.python.org/pipermail/python-dev/2000-August/015111.html
> 
> I think there are four possibilities:
> 1) add a timeoutsocket class to Lib/timeoutsocket.py

why not, it won't break anything
but timeoutsocket.py needs a bit of "polishing" in this case
and some testing... I had some strange errors on WinNT
with timeout_socket (everything worked flawlessly on linux),
but unfortunately I am now away from that (or any other WinNT) computer 
and cannot do any tests.

> 2) add a timeoutsocket class to Lib/socket.py

possible

> 3) replace the socket class in Lib/socket.py

this could break some applications... especially
if you play with changing blocking/nonblocking status of socket
in them

> 4) wait until the interval is down to one day

5) add timeouts at the C level to socketmodule

this would probably be the right solution, but 
rather difficult to write.
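(for what it's worth, the kernel side already exists in the form of
SO_RCVTIMEO, which is presumably what a C-level change would expose;
here is a rough illustration driven from Python -- it assumes struct
timeval is two native longs, which holds on common Unix platforms but
is not portable in general)

```python
# Rough illustration of what a C-level change could expose: SO_RCVTIMEO
# makes the kernel itself time out a blocked recv(), so the error comes
# back as EAGAIN instead of the call hanging forever.
# ASSUMPTION: struct timeval == two native longs ("ll"); true on common
# Unix platforms, not guaranteed everywhere.
import socket
import struct

def set_recv_timeout(sock, seconds, microseconds):
    timeval = struct.pack("ll", seconds, microseconds)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVTIMEO, timeval)
```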


and, of course, both timeout_socket and timeoutsocket
should be looked at rather closely. (I dismantled 
timeout_socket when I was hunting bugs in it, but have not
done it with timeoutsocket)


-- 
 -----------------------------------------------------------
| Radovan Garabik http://melkor.dnp.fmph.uniba.sk/~garabik/ |
| __..--^^^--..__    garabik @ melkor.dnp.fmph.uniba.sk     |
 -----------------------------------------------------------
Antivirus alert: file .signature infected by signature virus.
Hi! I'm a signature virus! Copy me into your signature file to help me spread!


From skip@mojam.com  Thu Sep 14 16:17:03 2000
From: skip@mojam.com (Skip Montanaro)
Date: Thu, 14 Sep 2000 10:17:03 -0500 (CDT)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com>
References: <200009121411.QAA30848@python.inrialpes.fr>
 <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com>
Message-ID: <14784.60399.893481.717232@beluga.mojam.com>

    Tim> how-about-everyone-pitch-in-to-help-clear-the-bug-backlog-instead?-ly

I find the way python-bugs is working these days extremely bizarre.  Is it
resending a bug when there's some sort of change?  A few I've examined were
originally submitted in 1999.  Are they just now filtering out of jitterbug
or have they had some comment added that I don't see?

Skip



From paul@prescod.net  Thu Sep 14 16:28:14 2000
From: paul@prescod.net (Paul Prescod)
Date: Thu, 14 Sep 2000 08:28:14 -0700
Subject: [Python-Dev] Challenge about print >> None
References: <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com>
Message-ID: <39C0EE8E.770CAA17@prescod.net>

Tim Peters wrote:
> 
>...
> 
> We went over it again at a PythonLabs mtg today, and compared to the other
> 20 things on our agenda, when it popped up we all agreed "eh" after about a
> minute.  It has supporters and detractors, the arguments are getting all of
> more elaborate, extreme and repetitive with each iteration, and positions
> are clearly frozen already.  That's what a BDFL is for.  He's seen all the
> arguments; they haven't changed his mind; and, sorry, but it's a tempest in
> a teapot regardless.

All of the little hacks and special cases add up.

In the face of all of this confusion the safest thing would be to make
print >> None illegal and then figure it out for Python 2.1. 

-- 
 Paul Prescod - Not encumbered by corporate consensus
Simplicity does not precede complexity, but follows it. 
	- http://www.cs.yale.edu/homes/perlis-alan/quotes.html


From jeremy@beopen.com  Thu Sep 14 16:38:56 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Thu, 14 Sep 2000 11:38:56 -0400 (EDT)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <14784.60399.893481.717232@beluga.mojam.com>
References: <200009121411.QAA30848@python.inrialpes.fr>
 <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com>
 <14784.60399.893481.717232@beluga.mojam.com>
Message-ID: <14784.61712.512770.129447@bitdiddle.concentric.net>

>>>>> "SM" == Skip Montanaro <skip@mojam.com> writes:

  Tim> how-about-everyone-pitch-in-to-help-clear-the-bug-backlog-instead?-ly

  SM> I find the way python-bugs is working these days extremely
  SM> bizarre.  Is it resending a bug when there's some sort of
  SM> change?  A few I've examined were originally submitted in 1999.
  SM> Are they just now filtering out of jitterbug or have they had
  SM> some comment added that I don't see?

Yes.  SF resends the entire bug report for every change to the bug.
If you change the priority from 5 to 4 or do anything else, it sends
mail.  It seems like too much mail to me, but better than no mail at
all.

Also note that the bugs list gets a copy of everything.  The submitter
and current assignee for each bug also get an email.

Jeremy


From jeremy@beopen.com  Thu Sep 14 16:48:50 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Thu, 14 Sep 2000 11:48:50 -0400 (EDT)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <200009140940.LAA02556@python.inrialpes.fr>
References: <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com>
 <200009140940.LAA02556@python.inrialpes.fr>
Message-ID: <14784.62306.209688.587211@bitdiddle.concentric.net>

>>>>> "VM" == Vladimir Marangozov <Vladimir.Marangozov@inrialpes.fr> writes:

  VM> Remove the feature, discuss it, try providing arguments so that
  VM> we can agree (or disagree), write the PEP including a summary of
  VM> the discussion, then decide and add the feature.

The last step in the PEP process is for Guido to accept or reject a
PEP.  Since he is one of the primary advocates of the print >>None
behavior, I don't see why we should do what you suggest.  Presumably
Guido will continue to want the feature.

  VM> In this particular case, I find Guido's attitude regarding the
  VM> "rules of the game" (that you have fixed, btw, PEPs included)
  VM> quite unpleasant.

What is Guido's attitude?  What are the "rules of the game"?

  VM> I speak for myself. Guido has invited me here so that I could
  VM> share my opinions and experience easily and that's what I'm
  VM> doing in my spare cycles (no, your agenda is not mine so I won't
  VM> look at the bug list).  If you think I'm doing more harm than
  VM> good, no problem. I'd be happy to decline his invitation and
  VM> quit.

You're a valued member of this community.  We welcome your opinions
and experience.  It appears that in this case, Guido's opinions and
experience lead to a different conclusion than yours.  I am not
thrilled with the print >> None behavior myself, but I do not see the
value of pursuing the issue at length.

  VM> I'll be even more explicit:

  VM> There are organizational bugs in the functioning of this
  VM> micro-society that would need to be fixed first, IMHO. Other
  VM> signs about this have been expressed in the past too. Nobody
  VM> commented. Silence can't rule forever. Note that I'm not writing
  VM> arguments for my own pleasure or to scratch my nose. My time is
  VM> precious enough, just like yours.

If I did not comment on early signs of organizational bugs, it was
probably because I did not see them.  We did a lot of hand-wringing
several months ago about the severe backlog in reviewing patches and
bugs.  We're making good progress on both the backlogs.  We also
formalized the design process for major language features.  Our
execution of that process hasn't been flawless, witness the features
in 2.0b1 that are still waiting for their PEPs to be written, but the
PEP process was instituted late in the 2.0 release process.

Jeremy


From effbot@telia.com  Thu Sep 14 17:05:05 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Thu, 14 Sep 2000 18:05:05 +0200
Subject: [Python-Dev] Challenge about print >> None
References: <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com> <39C0EE8E.770CAA17@prescod.net>
Message-ID: <00c201c01e65$8d327bc0$766940d5@hagrid>

Paul wrote:
> In the face of all of this confusion the safest thing would be to make
> print >> None illegal and then figure it out for Python 2.1.

Really?  So what's the next feature we'll have to take out after
some other python-dev member threatens to leave if he cannot
successfully force his ideas onto Guido and everyone else?

</F>

    "I'm really not a very nice person. I can say 'I don't care' with
    a straight face, and really mean it."
    -- Linus Torvalds, on why the B in BDFL really means "bastard"



From paul@prescod.net  Thu Sep 14 17:16:12 2000
From: paul@prescod.net (Paul Prescod)
Date: Thu, 14 Sep 2000 09:16:12 -0700
Subject: [Python-Dev] Challenge about print >> None
References: <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com> <39C0EE8E.770CAA17@prescod.net> <00c201c01e65$8d327bc0$766940d5@hagrid>
Message-ID: <39C0F9CC.C9ECC35E@prescod.net>

Fredrik Lundh wrote:
> 
> Paul wrote:
> > In the face of all of this confusion the safest thing would be to make
> > print >> None illegal and then figure it out for Python 2.1.
> 
> Really?  So what's the next feature we'll have to take out after
> some other python-dev member threatens to leave if he cannot
> successfully force his ideas onto Guido and everyone else?

There have been several participants, all long-time Python users, who
have said that this None thing is weird. Greg Ward, who even likes
*Perl* said it is weird.

By my estimation there are more voices against than for, and those that
are for are typically lukewarm ("I hated it at first but don't hate it
as much anymore").  Therefore I don't see any point in acting as if this
is a single man's crusade.

-- 
 Paul Prescod - Not encumbered by corporate consensus
Simplicity does not precede complexity, but follows it. 
	- http://www.cs.yale.edu/homes/perlis-alan/quotes.html


From akuchlin@mems-exchange.org  Thu Sep 14 17:32:57 2000
From: akuchlin@mems-exchange.org (Andrew Kuchling)
Date: Thu, 14 Sep 2000 12:32:57 -0400
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <39C0F9CC.C9ECC35E@prescod.net>; from paul@prescod.net on Thu, Sep 14, 2000 at 09:16:12AM -0700
References: <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com> <39C0EE8E.770CAA17@prescod.net> <00c201c01e65$8d327bc0$766940d5@hagrid> <39C0F9CC.C9ECC35E@prescod.net>
Message-ID: <20000914123257.C31741@kronos.cnri.reston.va.us>

On Thu, Sep 14, 2000 at 09:16:12AM -0700, Paul Prescod wrote:
>By my estimation there are more voices against than for, and those that
>are for are typically lukewarm ("I hated it at first but don't hate it
>as much anymore"). Therefore I don't see any point in acting as if this
>is a single man's crusade.

Indeed.  On the other hand, this issue is minor enough that it's not
worth walking away from the community over; walk away if you no longer
use Python, or if it's not fun any more, or if the tenor of the
community changes.  Not because of one particular bad feature; GvR's
added bad features before, but we've survived.  

(I should be thankful, really, since the >>None feature means more
material for my Python warts page.)

--amk



From effbot@telia.com  Thu Sep 14 18:07:58 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Thu, 14 Sep 2000 19:07:58 +0200
Subject: [Python-Dev] Challenge about print >> None
References: <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com> <39C0EE8E.770CAA17@prescod.net> <00c201c01e65$8d327bc0$766940d5@hagrid> <39C0F9CC.C9ECC35E@prescod.net>
Message-ID: <003a01c01e6e$56aa2180$766940d5@hagrid>

paul wrote:
> Therefore I don't see any point in acting as if this is a single man's crusade.

really?  who else thinks that this little feature "shows that the rules
are fixed" and "my time is too precious to work on bug fixes" and "we're
here to vote, not to work" and "since my veto doesn't count, there are
organizational bugs". 

can we have a new mailing list, please?  one that's only dealing with
cool code, bug fixes, release administrivia, etc.  practical stuff, not
ego problems.

</F>



From help@python.org  Thu Sep 14 18:28:54 2000
From: help@python.org (Martin von Loewis)
Date: Thu, 14 Sep 2000 19:28:54 +0200 (MET DST)
Subject: [Python-Dev] Re: [Python-Help] Bug in PyTuple_Resize
In-Reply-To: <200009141413.KAA21765@enkidu.stsci.edu> (delapena@stsci.edu)
References: <200009141413.KAA21765@enkidu.stsci.edu>
Message-ID: <200009141728.TAA04901@pandora.informatik.hu-berlin.de>

> Thank you for the response.  Unfortunately, I do not have the know-how at
> this time to solve this problem!  I did submit my original query and
> your response to the sourceforge bug tracking mechanism this morning.

I spent some time with this bug, and found that it is in some
unrelated code: the tuple resizing mechanism is buggy if cyclic gc
is enabled. A patch is included below. [and in SF patch 101509]

It just happens that this code is rarely used: in _tkinter, when
filtering tuples, and when converting sequences to tuples. And even
then, the bug triggers on most systems only for _tkinter: the tuple
gets smaller in filter, so realloc(3C) returns the same address;
tuple() normally knows the size in advance, so no resize
is necessary.

Regards,
Martin

Index: tupleobject.c
===================================================================
RCS file: /cvsroot/python/python/dist/src/Objects/tupleobject.c,v
retrieving revision 2.44
diff -u -r2.44 tupleobject.c
--- tupleobject.c	2000/09/01 23:29:27	2.44
+++ tupleobject.c	2000/09/14 17:12:07
@@ -510,7 +510,7 @@
 		if (g == NULL) {
 			sv = NULL;
 		} else {
-			sv = (PyTupleObject *)PyObject_FROM_GC(g);
+			sv = (PyTupleObject *)PyObject_FROM_GC(sv);
 		}
 #else
 		sv = (PyTupleObject *)


From Vladimir.Marangozov@inrialpes.fr  Thu Sep 14 22:34:24 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Thu, 14 Sep 2000 16:34:24 -0500
Subject: [Python-Dev] See you later, folks!
Message-ID: <200009142134.QAA07143@cj20424-a.reston1.va.home.com>

[Vladimir asked me to post this to the python-dev mailing list, and
to subsequently turn off his subscriptions.  Come back soon, Vladimir!
--Guido]

The time has come for me to leave you for some time. But rest assured,
not for the reasons you suspect <wink>. I'm in the process of changing
jobs & country. Big changes, that is.

So indeed, I'll unsubscribe from the python-dev list for a while and
indeed, I won't look at the bug list because I won't be able to, not
because I don't want to. (I won't be able to handle more patches for
that matter, sorry!)

Regarding the latest debate about extended print, things are surely
not so extreme as they sounded to Fredrik! So take it easy. I can
still sign with both hands what I've said, although you must
know that whenever I engage in the second round of a debate, I have
reasons to do so and my writing style becomes more impassioned, indeed.
But remember that python-dev is a place where educated opinions are being
confronted. The "bug" I referred to is that Guido, as the principal
proponent of a feature has not entered the second round of this debate
to defend it, despite the challenge I have formulated and subsequently
argued (I understand that he might have felt strange after reading my
posts). I apologize for my style if you feel that I should. I would
quit python-dev in the sense that if there are no more debates, I have
little to no interest in participating. That's what happens when,
for instance, Guido exercises his power prematurely, which is not a
good thing, overall.

In short, I suddenly felt like I had to clarify this situation, secretly
knowing that Guido & Tim and everybody else (except Fredrik, but I
forgive him <wink>) understands the many points I've raised. This
debate would be my latest "contribution" for some time.

Last but not least, I must say that I deeply respect Guido & Tim and
everybody else (including Fredrik <wink>) for their knowledge and
positive attitude!  (Tim, I respect your fat ass too <wink> -- he does
a wonderful job on c.l.py!)

See you later!

knowledge-cannot-shrink!-it-can-only-be-extended-and-so-should-be-print'ly
truly-None-forbidding'ly y'rs
--
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From guido@beopen.com  Thu Sep 14 23:15:49 2000
From: guido@beopen.com (Guido van Rossum)
Date: Thu, 14 Sep 2000 17:15:49 -0500
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: Your message of "Thu, 14 Sep 2000 08:28:14 MST."
 <39C0EE8E.770CAA17@prescod.net>
References: <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com>
 <39C0EE8E.770CAA17@prescod.net>
Message-ID: <200009142215.RAA07332@cj20424-a.reston1.va.home.com>

> All of the little hacks and special cases add up.
> 
> In the face of all of this confusion the safest thing would be to make
> print >> None illegal and then figure it out for Python 2.1. 

Sorry, no deal.  print>>file and print>>None are here to stay.

Paul, I don't see why you keep whining about this.  Sure, it's the
feature that everybody loves to hate.  But what's the big deal?  Get
over it.  I don't believe for a second that there is a trend of my having
stopped listening.  On the contrary, I've spent a great deal of time
reading the arguments against this feature and its refinement, and
I simply fail to be convinced by the counter-arguments.

If this had been in the language from day one nobody would have
challenged it.  (And I've used my time machine to prove it, so don't
argue. :-)

If you believe I should no longer be the BDFL, say so, but please keep
it out of python-dev.  We're trying to get work done here.  You're an
employee of a valued member of the Python Consortium.  As such you can
request (through your boss) to be subscribed to the Consortium mailing
list.  Feel free to bring this up there -- there's not much else going
on there.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From jeremy@beopen.com  Thu Sep 14 22:28:33 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Thu, 14 Sep 2000 17:28:33 -0400 (EDT)
Subject: [Python-Dev] Revised release schedule
Message-ID: <14785.17153.995000.379187@bitdiddle.concentric.net>

I just updated PEP 200 with some new details about the release
schedule.  These details are still open to some debate, but they need
to be resolved quickly.

I propose that we release 2.0 beta 2 on 26 Sep 2000.  That's one week
from this coming Tuesday.  This would be the final beta.  The final
release would be two weeks after that on 10 Oct 2000.

The feature freeze we imposed before the first beta is still in effect
(more or less).  We should only be adding new features when they fix
crucial bugs.  In order to allow time to prepare the release, all
changes should be made by the end of the day on Sunday, 24 Sep.

There is still a lot of work that remains to resolve open patches and
fix as many bugs as possible.  I have re-opened a number of patches
that were postponed prior to the 2.0b1 release.  It is not clear that
all of these patches should be accepted, but some of them may be
appropriate for inclusion now.  

There is also a large backlog of old bugs and a number of new bugs
from 2.0b1.  Obviously, we need to get these new bugs resolved and
make a dent in the old bugs.  I'll send a note later today with some
guidelines for bug triage.

Jeremy


From guido@beopen.com  Thu Sep 14 23:25:37 2000
From: guido@beopen.com (Guido van Rossum)
Date: Thu, 14 Sep 2000 17:25:37 -0500
Subject: [Python-Dev] Is the 2.0 xml package too immature to release?
In-Reply-To: Your message of "Thu, 14 Sep 2000 08:28:14 MST."
 <39C0EE8E.770CAA17@prescod.net>
References: <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com>
 <39C0EE8E.770CAA17@prescod.net>
Message-ID: <200009142225.RAA07360@cj20424-a.reston1.va.home.com>

> In the face of all of this confusion the safest thing would be to make
> [...] illegal and then figure it out for Python 2.1. 

Taking out controversial features is a good idea in some cases, in
order to prevent likely disasters.

I've heard that the xml support in 2.0b1 is broken, and that it's not
clear that it will be possible to fix it in time (the 2.0b2 release is
due in two weeks).  The best thing here seems to be to remove it and put it
back in 2.1 (due 3-6 months after 2.0).  In the meantime, the XML-sig
can release its own version.

The way I understand the situation right now is that there are two
packages claiming the name xml; one in the 2.0 core and one released
by the XML-sig.  While the original intent was for the XML-sig package
to be a superset of the core package, this doesn't appear to be
currently the case, even if the brokenness of the core xml package can
be fixed.

We absolutely cannot have a situation where there could be two
applications, one working only with the xml-sig's xml package, and the
other only with the 2.0 core xml package.  If at least one direction
of compatibility cannot be guaranteed, I propose that one of the
packages be renamed.  We can either rename the xml package to be
released with Python 2.0 to xmlcore, or we can rename the xml-sig's
xml package to xmlsig (or whatever they like).  (Then when in 2.1 the
issue is resolved, we can rename the compatible solution back to xml.)

Given that the xml-sig already has released packages called xml, the
best solution (and one which doesn't require the cooperation of the
xml-sig!) is to rename the 2.0 core xml package to xmlcore.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From tim_one@email.msn.com  Thu Sep 14 22:28:22 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 14 Sep 2000 17:28:22 -0400
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <200009140940.LAA02556@python.inrialpes.fr>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEBCHGAA.tim_one@email.msn.com>

[Vladimir Marangozov]
> Nobody is condemned when receptive. You're inflexibly persistent here.

I'm terse due to lack of both time for, and interest in, this issue.  I'm
persistent because Guido already ruled on this, has explicitly declined to
change his mind, and that's the way this language has always evolved.  Had
you hung around Python in the early days, there was often *no* discussion
about new features:  they just showed up by surprise.  Since that's how
lambda got in, maybe Guido started Python-Dev to oppose future mistakes like
that <wink>.

> Remove the feature, discuss it, try providing arguments so that we can
> agree (or disagree), write the PEP including a summary of the discussion,
> then decide and add the feature.

It was already very clear that that's what you want.  It should have been
equally clear that it's not what you're going to get on this one.  Take it
up with Guido if you must, but I'm out of it.

> In this particular case, I find Guido's attitude regarding the "rules of
> the game" (that you have fixed, btw, PEPs included) quite unpleasant.
>
> I speak for myself. Guido has invited me here so that I could share
> my opinions and experience easily and that's what I'm doing in my spare
> cycles (no, your agenda is not mine so I won't look at the bug list).

Then understand that my agenda is Guido's, and not only because he's my
boss.  Slashing the bug backlog *now* is something he believes is important
to Python's future, and evidently far more important to him than this
isolated little print gimmick.  It's also my recollection that he started
Python-Dev to get help on decisions that were important to him, not to
endure implacable opposition to every little thing he does.

If he debated every issue brought up on Python-Dev alone to the satisfaction
of just the people here, he would have time for nothing else.  That's the
truth.  As it is, he tells me he spends at least 2 hours every day just
*reading* Python-Dev, and I believe that, because I do too.  So long as this
is a dictatorship, I think it's impossible for people not to feel slighted
at times.  That's the way it's always been, and it's worked very well
despite that.

And I'll tell you something:  there is *nobody* in the history of Python who
has had more suggestions and "killer arguments" rejected by Guido than me.
I got over that in '93, though.  Play with him when you agree, back off when
he says "no".  That's what works.

> If you think I'm doing more harm than good, no problem. I'd be happy
> to decline his invitation and quit.

In general I think Guido believes your presence here is extremely helpful.
I know that I do.  On this particular issue, though, no, continuing to beat
on something after Guido says "case closed" isn't helpful.

> I'll be even more explicit:
>
> There are organizational bugs in the functioning of this micro-society
> that would need to be fixed first, IMHO. Other signs about this have
> been expressed in the past too. Nobody commented.

People have been griping about the way Python is run since '91, so I'm not
buying the idea that this is something new.  The PEP process *is* something
new and has been of very mixed utility so far, but is particularly
handicapped at the start due to the need to record old decisions whose
*real* debates actually ended a long time ago.

I certainly agree that the way this particular gimmick got snuck in violated
"the rules", and if it were anyone other than Guido who did it I'd be
skinning them alive.  I figure he's entitled, though.  Don't you?

> Silence can't rule forever. Note that I'm not writing arguments for
> my own pleasure or to scratch my nose. My time is precious enough, just
> like yours.

Honestly, I don't know why you've taken your time to pursue this repeatedly.
Did Guido say something to suggest that he might change his mind?  I didn't
see it.

> ...
> Open your eyes, though.

I believe they're open, but that we're seeing different visions of how
Python *should* be run.

> pre-release-pressure-can-do-more-harm-than-it-should'ly ly

We've held a strict line on "bugfixes only" since 2.0b1 went out the door,
and I've indeed spent many an hour debating that with the feature-crazed
too.  The debates about all that, and all this, and the license mess, are
sucking my life away.  I still think we're doing a damned good job, though
<wink>.

over-and-out-ly y'rs  - tim




From tim_one@email.msn.com  Thu Sep 14 22:28:25 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 14 Sep 2000 17:28:25 -0400
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <39C0EE8E.770CAA17@prescod.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEBCHGAA.tim_one@email.msn.com>

[Paul Prescod]
> All of the little hacks and special cases add up.

Yes, they add up to a wonderful language <0.9 wink>.

> In the face of all of this confusion the safest thing would be to make
> print >> None illegal and then figure it out for Python 2.1.

There's no confusion in Guido's mind, though.

Well, not on this.  I'll tell you he's *real* confused about xml, though:
we're getting reports that the 2.0b1 version of the xml package is unusably
buggy.  If *that* doesn't get fixed, xml will get tossed out of 2.0final.
Fred Drake has volunteered to see what he can do about that, but it's
unclear whether he can make enough time to pursue it.




From effbot@telia.com  Thu Sep 14 22:46:11 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Thu, 14 Sep 2000 23:46:11 +0200
Subject: [Python-Dev] Re: [Python-Help] Bug in PyTuple_Resize
References: <200009141413.KAA21765@enkidu.stsci.edu> <200009141728.TAA04901@pandora.informatik.hu-berlin.de>
Message-ID: <005201c01e95$3741e680$766940d5@hagrid>

martin wrote:
> I spent some time with this bug, and found that it is in some
> unrelated code: the tuple resizing mechanism is buggy if cyclic gc
> is enabled. A patch is included below. [and in SF patch 101509]

wow, that was quick!

I've assigned the bug back to you.  go ahead and check
it in, and mark the bug as closed.

thanks /F



From akuchlin@mems-exchange.org  Thu Sep 14 22:47:19 2000
From: akuchlin@mems-exchange.org (Andrew Kuchling)
Date: Thu, 14 Sep 2000 17:47:19 -0400
Subject: [Python-Dev] Is the 2.0 xml package too immature to release?
In-Reply-To: <200009142225.RAA07360@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Thu, Sep 14, 2000 at 05:25:37PM -0500
References: <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com> <39C0EE8E.770CAA17@prescod.net> <200009142225.RAA07360@cj20424-a.reston1.va.home.com>
Message-ID: <20000914174719.A29499@kronos.cnri.reston.va.us>

On Thu, Sep 14, 2000 at 05:25:37PM -0500, Guido van Rossum wrote:
>by the XML-sig.  While the original intent was for the XML-sig package
>to be a superset of the core package, this doesn't appear to be
>currently the case, even if the brokenness of the core xml package can
>be fixed.

I'd be more inclined to blame the XML-SIG package; the last public
release is quite elderly, and the CVS tree hasn't been updated to be a
superset of the xml/ package in the Python tree.  However, if you want
to drop the Lib/xml/ package from Python, I have no objections at all;
I never wanted it in the first place.

--amk



From effbot@telia.com  Thu Sep 14 23:16:32 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Fri, 15 Sep 2000 00:16:32 +0200
Subject: [Python-Dev] ...as Python becomes a more popular operating system...
Message-ID: <000701c01e99$d0fac9a0$766940d5@hagrid>

http://www.upside.com/texis/mvm/story?id=39c10a5e0

</F>



From guido@beopen.com  Fri Sep 15 00:14:52 2000
From: guido@beopen.com (Guido van Rossum)
Date: Thu, 14 Sep 2000 18:14:52 -0500
Subject: [Python-Dev] Is the 2.0 xml package too immature to release?
In-Reply-To: Your message of "Thu, 14 Sep 2000 17:47:19 -0400."
 <20000914174719.A29499@kronos.cnri.reston.va.us>
References: <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com> <39C0EE8E.770CAA17@prescod.net> <200009142225.RAA07360@cj20424-a.reston1.va.home.com>
 <20000914174719.A29499@kronos.cnri.reston.va.us>
Message-ID: <200009142314.SAA08092@cj20424-a.reston1.va.home.com>

> On Thu, Sep 14, 2000 at 05:25:37PM -0500, Guido van Rossum wrote:
> >by the XML-sig.  While the original intent was for the XML-sig package
> >to be a superset of the core package, this doesn't appear to be
> >currently the case, even if the brokenness of the core xml package can
> >be fixed.
> 
> I'd be more inclined to blame the XML-SIG package; the last public
> release is quite elderly, and the CVS tree hasn't been updated to be a
> superset of the xml/ package in the Python tree.  However, if you want
> to drop the Lib/xml/ package from Python, I have no objections at all;
> I never wanted it in the first place.

It's easy to blame.  (Aren't you responsible for the XML-SIG releases? :-)

I can't say that I wanted the xml package either -- I thought that the
XML-SIG wanted it, and insisted that it be called 'xml', conflicting
with their own offering.  I'm not part of that group, and have no time
to participate in a discussion there or read their archives.  Somebody
please get their attention -- otherwise it *will* be removed from 2.0!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From jeremy@beopen.com  Thu Sep 14 23:42:00 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Thu, 14 Sep 2000 18:42:00 -0400 (EDT)
Subject: [Python-Dev] ...as Python becomes a more popular operating system...
In-Reply-To: <000701c01e99$d0fac9a0$766940d5@hagrid>
References: <000701c01e99$d0fac9a0$766940d5@hagrid>
Message-ID: <14785.21560.61961.86040@bitdiddle.concentric.net>

I like Python plenty, but Emacs is my favorite operating system.

Jeremy


From MarkH@ActiveState.com  Thu Sep 14 23:37:22 2000
From: MarkH@ActiveState.com (Mark Hammond)
Date: Fri, 15 Sep 2000 09:37:22 +1100
Subject: [Python-Dev] Is the 2.0 xml package too immature to release?
In-Reply-To: <20000914174719.A29499@kronos.cnri.reston.va.us>
Message-ID: <ECEPKNMJLHAPFFJHDOJBMEGHDJAA.MarkH@ActiveState.com>

[Guido]
> On Thu, Sep 14, 2000 at 05:25:37PM -0500, Guido van Rossum wrote:
> >by the XML-sig.  While the original intent was for the XML-sig package
> >to be a superset of the core package, this doesn't appear to be
> >currently the case, even if the brokenness of the core xml package can
> >be fixed.

[Andrew]
> I'd be more inclined to blame the XML-SIG package;

Definitely.  This XML stuff has cost me a number of hours a number of
times!  Always with other people's code, so I didn't know where to turn.

Now we find Guido saying things like:

> > the best solution (and one which doesn't require
> > the cooperation of the xml-sig!) is to rename
> > the 2.0 core xml package to xmlcore.

What is going on here?  We are forced to rename a core package, largely to
avoid the cooperation of, and avoid conflicting with, a SIG explicitly
set up to develop this core package in the first place!!!

How did this happen?  Does the XML SIG need to be shut down (while it still
can <wink>)?

> However, if you want to drop the Lib/xml/ package from
> Python, I have no objections at all; I never wanted it
> in the first place.

Agreed.  It must be dropped if it cannot be fixed.  As it stands, an
application can make no assumptions about which xml features actually work.

But IMO, the Python core has first grab at the name "xml" - if we can't get
the cooperation of the SIG, it should be their problem.  Where do we want
to be with respect to XML in a few years?  Surely not with some half-assed
"xmlcore" package, and some extra "xml" package you still need to get
anything done...

Mark.



From prescod@prescod.net  Fri Sep 15 00:25:38 2000
From: prescod@prescod.net (Paul)
Date: Thu, 14 Sep 2000 18:25:38 -0500 (CDT)
Subject: [Python-Dev] Re: Is the 2.0 xml package too immature to release?
In-Reply-To: <200009142225.RAA07360@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.LNX.4.21.0009141806390.25261-100000@amati.techno.com>

On Thu, 14 Sep 2000, Guido van Rossum wrote:

> > In the face of all of this confusion the safest thing would be to make
> > [...] illegal and then figure it out for Python 2.1. 
> 
> Taking out controversial features is a good idea in some cases, in
> order to prevent likely disasters.
> 
> I've heard that the xml support in 2.0b1 is broken, and that it's not
> clear that it will be possible to fix it in time (the 2.0b1 release is
> due in two weeks).  The best thing here seems to remove it and put it
> back in 2.1 (due 3-6 months after 2.0).  In the mean time, the XML-sig
> can release its own version.

I've been productively using the 2.0 XML package. There are three main
modules in there: Expat -- which I believe is fine, SAX -- which is not
finished, and minidom -- which has a couple of very minor known bugs
relating to standards conformance.

If you are asking whether SAX can be fixed in time then the answer is "I
think so but it is out of my hands."  I contributed fixes to SAX this
morning and the remaining known issues are design issues. I'm not the
designer. If I were the designer I'd call it done, make a test suite and
go home.

Whether or not it is finished, I see no reason to hold up either minidom
or expat. There have been very few complaints about either.

> The way I understand the situation right now is that there are two
> packages claiming the name xml; one in the 2.0 core and one released
> by the XML-sig.  While the original intent was for the XML-sig package
> to be a superset of the core package, this doesn't appear to be
> currently the case, even if the brokenness of the core xml package can
> be fixed.

That's true. Martin V. Loewis has promised to look into this situation for
us. I believe he has a good understanding of the issues.

> We absolutely cannot have a situation where there could be two
> applications, one working only with the xml-sig's xml package, and the
> other only with the 2.0 core xml package.  If at least one direction
> of compatibility cannot be guaranteed, I propose that one of the
> packages be renamed.  We can either rename the xml package to be
> released with Python 2.0 to xmlcore, or we can rename the xml-sig's
> xml package to xmlsig (or whatever they like).  (Then when in 2.1 the
> issue is resolved, we can rename the compatible solution back to xml.)
> 
> Given that the xml-sig already has released packages called xml, the
> best solution (and one which doesn't require the cooperation of the
> xml-sig!) is to rename the 2.0 core xml package to xmlcore.

I think it would be unfortunate if the Python xml processing package were
named xmlcore for eternity. The whole point of putting it in the core is
that it should become more popular and ubiquitous than an add-on module.

I'd rather see Martin given an opportunity to look into it. If he hasn't
made progress in a week then we can rename one or the other.

 Paul




From prescod@prescod.net  Fri Sep 15 00:53:15 2000
From: prescod@prescod.net (Paul)
Date: Thu, 14 Sep 2000 18:53:15 -0500 (CDT)
Subject: [Python-Dev] Is the 2.0 xml package too immature to release?
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBMEGHDJAA.MarkH@ActiveState.com>
Message-ID: <Pine.LNX.4.21.0009141829330.25261-100000@amati.techno.com>

On Fri, 15 Sep 2000, Mark Hammond wrote:

> [Andrew]
> > I'd be more inclined to blame the XML-SIG package;
> 
> Definitely.  This XML stuff has cost me a number of hours a number of
> times!  Always with other people's code, so I didn't know where to turn.

The XML SIG package is unstable. It's a grab bag. It's the cool stuff
people have been working on. I've said about a hundred times that it will
never get to version 1, will never be stable, will never be reliable
because that isn't how anyone views it. I don't see it as a flaw: it's the
place you go for cutting edge XML stuff. That's why Andrew and Guido are
dead wrong that we don't need Python as a package in the core. That's
where the stable stuff goes. Expat and Minidom are stable. IIRC, their
APIs have only changed in minor ways in the last year.

> What is going on here?  We are forced to rename a core package, largely to
> avoid the cooperation of, and avoid conflicting with, a SIG explicitly
> set up to develop this core package in the first place!!!
> 
> How did this happen?  Does the XML SIG need to be shut down (while it still
> can <wink>)?

It's not that anybody is not cooperating. It's that there are a small
number of people doing the actual work and they drop in and out of
availability based on their real life jobs. It isn't always, er, polite to
tell someone "get out of the way I'll do it myself." Despite the fact that
all the nasty hints are being dropped in my direction, nobody exercises a
BDFL position in the XML SIG. There's the central issue. Nobody imposes
deadlines, nobody says what features should go in or shouldn't and in what
form. If I tried to do so I would be rightfully slapped down.

> But IMO, the Python core has first grab at the name "xml" - if we can't get
> the cooperation of the SIG, it should be their problem.  Where do we want
> to be with respect to XML in a few years?  Surely not with some half-assed
> "xmlcore" packge, and some extra "xml" package you still need to get
> anything done...

It's easy to say that the core is important and the sig package is
secondary, but:

 a) Guido says that they are both important
 b) The sig package has some users (at least a few) with running code

Nevertheless, I agree with you that in the long term we will wish we had
just used the name "xml" for the core package. I'm just pointing out that
it isn't as simple as it looks when you aren't involved.

 Paul Prescod



From prescod@prescod.net  Fri Sep 15 01:12:28 2000
From: prescod@prescod.net (Paul)
Date: Thu, 14 Sep 2000 19:12:28 -0500 (CDT)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <200009142215.RAA07332@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.LNX.4.21.0009141910140.25261-100000@amati.techno.com>

On Thu, 14 Sep 2000, Guido van Rossum wrote:
> ...
>
> Paul, I don't see why you keep whining about this. ...
> ...
> 
> If this had been in the language from day one nobody would have
> challenged it.  (And I've used my time machine to prove it, so don't
> argue. :-)

Well I still dislike "print" and map( None, ...) but yes, the societal bar
is much higher for change than for status quo. That's how the world works.

> If you believe I should no longer be the BDFL, say so, but please keep
> it out of python-dev.  We're trying to get work done here.  You're an
> employee of a valued member of the Python Consortium.  As such you can
> request (through your boss) to be subscribed to the Consortium mailing
> list.  Feel free to bring this up there -- there's not much else going
> on there.

What message are you replying to?

According to the archives, I've sent four messages since the beginning of
September. None of them suggest you are doing a bad job as BDFL (other
than being wrong on this particular issue).

 Paul Prescod




From trentm@ActiveState.com  Fri Sep 15 01:20:45 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Thu, 14 Sep 2000 17:20:45 -0700
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src configure.in,1.156,1.157 configure,1.146,1.147 config.h.in,2.72,2.73
In-Reply-To: <200009141547.IAA14881@slayer.i.sourceforge.net>; from fdrake@users.sourceforge.net on Thu, Sep 14, 2000 at 08:47:10AM -0700
References: <200009141547.IAA14881@slayer.i.sourceforge.net>
Message-ID: <20000914172045.E3038@ActiveState.com>

On Thu, Sep 14, 2000 at 08:47:10AM -0700, Fred L. Drake wrote:
> Update of /cvsroot/python/python/dist/src
> In directory slayer.i.sourceforge.net:/tmp/cvs-serv14790
> 
> Modified Files:
> 	configure.in configure config.h.in 
> Log Message:
> 
> Allow configure to detect whether ndbm.h or gdbm/ndbm.h is installed.
> This allows dbmmodule.c to use either without having to add additional
> options to the Modules/Setup file or make source changes.
> 
> (At least some Linux systems use gdbm to emulate ndbm, but only install
> the ndbm.h header as /usr/include/gdbm/ndbm.h.)
>
> Index: configure.in
> ===================================================================
> RCS file: /cvsroot/python/python/dist/src/configure.in,v
> retrieving revision 1.156
> retrieving revision 1.157
> diff -C2 -r1.156 -r1.157
> *** configure.in	2000/09/08 02:17:14	1.156
> --- configure.in	2000/09/14 15:47:04	1.157
> ***************
> *** 372,376 ****
>   sys/audioio.h sys/file.h sys/lock.h db_185.h db.h \
>   sys/param.h sys/select.h sys/socket.h sys/time.h sys/times.h \
> ! sys/un.h sys/utsname.h sys/wait.h pty.h libutil.h)
>   AC_HEADER_DIRENT
>   
> --- 372,376 ----
>   sys/audioio.h sys/file.h sys/lock.h db_185.h db.h \
>   sys/param.h sys/select.h sys/socket.h sys/time.h sys/times.h \
> ! sys/un.h sys/utsname.h sys/wait.h pty.h libutil.h ndbm.h gdbm/ndbm.h)
>   AC_HEADER_DIRENT

Is this the correct fix? Previously I had been compiling the dbmmodule on
Debian and RedHat boxes using /usr/include/db1/ndbm.h (I had to change the
Setup.in line to include this directory). Now the configure test says that
ndbm.h does not exist, and this patch (see below) to dbmmodule.c won't
compile.



> Index: dbmmodule.c
> ===================================================================
> RCS file: /cvsroot/python/python/dist/src/Modules/dbmmodule.c,v
> retrieving revision 2.22
> retrieving revision 2.23
> diff -C2 -r2.22 -r2.23
> *** dbmmodule.c   2000/09/01 23:29:26 2.22
> --- dbmmodule.c   2000/09/14 15:48:06 2.23
> ***************
> *** 8,12 ****
> --- 8,22 ----
>   #include <sys/stat.h>
>   #include <fcntl.h>
> +
> + /* Some Linux systems install gdbm/ndbm.h, but not ndbm.h.  This supports
> +  * whichever configure was able to locate.
> +  */
> + #if defined(HAVE_NDBM_H)
>   #include <ndbm.h>
> + #elif defined(HAVE_GDBM_NDBM_H)
> + #include <gdbm/ndbm.h>
> + #else
> + #error "No ndbm.h available!"
> + #endif
>
>   typedef struct {


-- 
Trent Mick
TrentM@ActiveState.com


From akuchlin@mems-exchange.org  Fri Sep 15 03:05:40 2000
From: akuchlin@mems-exchange.org (Andrew Kuchling)
Date: Thu, 14 Sep 2000 22:05:40 -0400
Subject: [Python-Dev] Is the 2.0 xml package too immature to release?
In-Reply-To: <200009142314.SAA08092@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Thu, Sep 14, 2000 at 06:14:52PM -0500
References: <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com> <39C0EE8E.770CAA17@prescod.net> <200009142225.RAA07360@cj20424-a.reston1.va.home.com> <20000914174719.A29499@kronos.cnri.reston.va.us> <200009142314.SAA08092@cj20424-a.reston1.va.home.com>
Message-ID: <20000914220540.A26196@newcnri.cnri.reston.va.us>

On Thu, Sep 14, 2000 at 06:14:52PM -0500, Guido van Rossum wrote:
>It's easy to blame.  (Aren't you responsible for the XML-SIG releases? :-)

Correct; I wouldn't presume to flagellate someone else.

>I can't say that I wanted the xml package either -- I thought that the
>XML-SIG wanted it, and insisted that it be called 'xml', conflicting
>with their own offering.  I'm not part of that group, and have no time

Most of the XML-SIG does want it; I'm just not one of them.

--amk


From petrilli@amber.org  Fri Sep 15 03:29:35 2000
From: petrilli@amber.org (Christopher Petrilli)
Date: Thu, 14 Sep 2000 22:29:35 -0400
Subject: [Python-Dev] ...as Python becomes a more popular operating system...
In-Reply-To: <14785.21560.61961.86040@bitdiddle.concentric.net>; from jeremy@beopen.com on Thu, Sep 14, 2000 at 06:42:00PM -0400
References: <000701c01e99$d0fac9a0$766940d5@hagrid> <14785.21560.61961.86040@bitdiddle.concentric.net>
Message-ID: <20000914222935.A16149@trump.amber.org>

Jeremy Hylton [jeremy@beopen.com] wrote:
> I like Python plenty, but Emacs is my favorite operating system.

M-% operating system RET religion RET !

:-)
Chris
-- 
| Christopher Petrilli
| petrilli@amber.org


From Moshe Zadka <moshez@math.huji.ac.il>  Fri Sep 15 12:06:44 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Fri, 15 Sep 2000 14:06:44 +0300 (IDT)
Subject: [Python-Dev] Vacation
Message-ID: <Pine.GSO.4.10.10009151403560.23713-100000@sundial>

I'm going to be away from my e-mail from the 16th to the 23rd as I'm going
to be vacationing in the Netherlands. Please do not count on me to do
anything that needs to be done until the 24th. I currently have two
patches assigned to me which should be considered before b2, so if b2 is
before the 24th, please assign them to someone else.

Thanks in advance.
--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From guido@beopen.com  Fri Sep 15 13:40:52 2000
From: guido@beopen.com (Guido van Rossum)
Date: Fri, 15 Sep 2000 07:40:52 -0500
Subject: [Python-Dev] Re: Is the 2.0 xml package too immature to release?
In-Reply-To: Your message of "Thu, 14 Sep 2000 18:25:38 EST."
 <Pine.LNX.4.21.0009141806390.25261-100000@amati.techno.com>
References: <Pine.LNX.4.21.0009141806390.25261-100000@amati.techno.com>
Message-ID: <200009151240.HAA09833@cj20424-a.reston1.va.home.com>

[me]
> > Given that the xml-sig already has released packages called xml, the
> > best solution (and one which doesn't require the cooperation of the
> > xml-sig!) is to rename the 2.0 core xml package to xmlcore.
> 
> I think it would be unfortunate if the Python xml processing package were
> named xmlcore for eternity. The whole point of putting it in the core is
> that it should become more popular and ubiquitous than an add-on module.

I'm not proposing that it be called xmlcore for eternity, but I see a
*practical* problem with the 2.0 release: the xml-sig has a package
called 'xml' (and they've had dibs on the name for years!) which is
incompatible.  We can't force them to issue a new release under a
different name.  I don't want to break other people's code that
requires the xml-sig's xml package.

I propose the following:

We remove the '_xmlplus' feature.  It seems better not to rely on the
xml-sig to provide upgrades to the core xml package.  We're planning
2.1, 2.2, ... releases 3-6 months apart which should be quick enough
for most upgrade needs; we can issue service packs in between if
necessary.

*IF* (and that's still a big "if"!) the xml core support is stable
before Sept. 26, we'll keep it under the name 'xmlcore'.  If it's not
stable, we remove it, but we'll consider it for 2.1.

In 2.1, presuming the XML-sig has released its own package under a
different name, we'll rename 'xmlcore' to 'xml' (keeping 'xmlcore' as
a backwards compatibility feature until 2.2).

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From guido@beopen.com  Fri Sep 15 13:46:30 2000
From: guido@beopen.com (Guido van Rossum)
Date: Fri, 15 Sep 2000 07:46:30 -0500
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: Your message of "Thu, 14 Sep 2000 19:12:28 EST."
 <Pine.LNX.4.21.0009141910140.25261-100000@amati.techno.com>
References: <Pine.LNX.4.21.0009141910140.25261-100000@amati.techno.com>
Message-ID: <200009151246.HAA09902@cj20424-a.reston1.va.home.com>

> Well I still dislike "print" and map( None, ...) but yes, the societal bar
> is much higher for change than for status quo. That's how the world works.

Thanks.  You're getting over it just fine.  Don't worry!

> > If you believe I should no longer be the BDFL, say so, but please keep
> > it out of python-dev.  We're trying to get work done here.  You're an
> > employee of a valued member of the Python Consortium.  As such you can
> > request (through your boss) to be subscribed to the Consortium mailing
> > list.  Feel free to bring this up there -- there's not much else going
> > on there.
> 
> What message are you replying to?
> 
> According to the archives, I've sent four messages since the beginning of
> September. None of them suggest you are doing a bad job as BDFL (other
> than being wrong on this particular issue).

My apologies.  It must have been Vladimir's.  I was on the phone and
in meetings for most of the day and saw a whole slew of messages about
this issue.  Let's put this to rest -- I still have 50 more messages
to skim.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From thomas.heller@ion-tof.com  Fri Sep 15 16:05:22 2000
From: thomas.heller@ion-tof.com (Thomas Heller)
Date: Fri, 15 Sep 2000 17:05:22 +0200
Subject: [Python-Dev] Bug in 1.6 and 2.0b1 re?
Message-ID: <032a01c01f26$624a7900$4500a8c0@thomasnb>

[I posted this to the distutils mailing list, but have not yet
received an answer]

> This may not be directly related to distutils,
> it may also be a bug in 1.6 and 2.0b1 re implementation.
> 
> 'setup.py sdist' with the current distutils CVS version
> hangs while parsing MANIFEST.in,
> executing the re.sub command in these lines in text_file.py:
> 
>         # collapse internal whitespace (*after* joining lines!)
>         if self.collapse_ws:
>             line = re.sub (r'(\S)\s+(\S)', r'\1 \2', line)
> 
> 
> Has anyone else noticed this, or is something wrong on my side?
> 
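[Editorial aside: the substitution quoted above is simple in itself; on a
working re implementation it behaves as follows.  A standalone sketch, not
the actual distutils code; the sample line is invented for illustration:]

```python
import re

# Collapse runs of internal whitespace into single spaces, as the
# collapse_ws option in distutils' text_file.py does.
line = "include *.txt\t\t*.in   README"
collapsed = re.sub(r'(\S)\s+(\S)', r'\1 \2', line)
print(collapsed)  # -> include *.txt *.in README
```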

[And a similar problem has been posted to c.l.p by vio]

> I believe there may be a RE bug in 2.0b1. Consider the following script:
> 
> #!/usr/bin/env python
> import re
> s = "red green blue"
> m = re.compile(r'green (\w+)', re.IGNORECASE)
> t = re.subn(m, r'matchedword \1 blah', s)
> print t
> 
> 
> When I run this on 1.5.2, I get the following expected output:
> 
> ('red matchedword blue blah', 1)
> 
> 
> If I run it on 2.0b1, python basically hangs.
> 
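[Editorial aside: the 1.5.2 output quoted above is what any non-buggy re
implementation should produce; restated verbatim, the script makes an easy
regression check:]

```python
import re

# vio's test case from c.l.p: substitute using a pre-compiled,
# case-insensitive pattern with a backreference in the replacement.
s = "red green blue"
m = re.compile(r'green (\w+)', re.IGNORECASE)
t = re.subn(m, r'matchedword \1 blah', s)
print(t)  # -> ('red matchedword blue blah', 1)
```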

Thomas



From guido@beopen.com  Fri Sep 15 17:24:47 2000
From: guido@beopen.com (Guido van Rossum)
Date: Fri, 15 Sep 2000 11:24:47 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib pickle.py,1.38,1.39
In-Reply-To: Your message of "Fri, 15 Sep 2000 08:14:54 MST."
 <200009151514.IAA26707@slayer.i.sourceforge.net>
References: <200009151514.IAA26707@slayer.i.sourceforge.net>
Message-ID: <200009151624.LAA10888@cj20424-a.reston1.va.home.com>

> --- 578,624 ----
>   
>       def load_string(self):
> !         rep = self.readline()[:-1]
> !         if not self._is_string_secure(rep):
> !             raise ValueError, "insecure string pickle"
> !         self.append(eval(rep,
>                            {'__builtins__': {}})) # Let's be careful
>       dispatch[STRING] = load_string
> + 
> +     def _is_string_secure(self, s):
> +         """Return true if s contains a string that is safe to eval
> + 
> +         The definition of secure string is based on the implementation
> +         in cPickle.  s is secure as long as it only contains a quoted
> +         string and optional trailing whitespace.
> +         """
> +         q = s[0]
> +         if q not in ("'", '"'):
> +             return 0
> +         # find the closing quote
> +         offset = 1
> +         i = None
> +         while 1:
> +             try:
> +                 i = s.index(q, offset)
> +             except ValueError:
> +                 # if there is an error the first time, there is no
> +                 # close quote
> +                 if offset == 1:
> +                     return 0
> +             if s[i-1] != '\\':
> +                 break
> +             # check to see if this one is escaped
> +             nslash = 0
> +             j = i - 1
> +             while j >= offset and s[j] == '\\':
> +                 j = j - 1
> +                 nslash = nslash + 1
> +             if nslash % 2 == 0:
> +                 break
> +             offset = i + 1
> +         for c in s[i+1:]:
> +             if ord(c) > 32:
> +                 return 0
> +         return 1
>   
>       def load_binstring(self):

Hm...  This seems to add a lot of work to a very common item in
pickles.

I had a different idea on how to make this safe from abuse: pass eval
a globals dict with an empty __builtins__ dict, as follows:
{'__builtins__': {}}.
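A minimal sketch of that idea (illustrative only, not the actual pickle
source): with an empty __builtins__ in the globals dict, a quoted string
literal still evaluates, but any name lookup fails:

```python
# Evaluating with an empty __builtins__ mapping: literals work,
# but every global and builtin name is undefined.
safe_globals = {'__builtins__': {}}

# A string literal (what load_string feeds to eval) is fine:
result = eval(r"'a\npickled string'", safe_globals)
print(result)

# ...but any attempt to reach a builtin raises NameError:
try:
    eval("open('/etc/passwd')", safe_globals)
except NameError:
    print("name lookups are blocked")
```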

Have you timed it?

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From guido@beopen.com  Fri Sep 15 17:29:40 2000
From: guido@beopen.com (Guido van Rossum)
Date: Fri, 15 Sep 2000 11:29:40 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib pickle.py,1.38,1.39
In-Reply-To: Your message of "Fri, 15 Sep 2000 11:24:47 EST."
 <200009151624.LAA10888@cj20424-a.reston1.va.home.com>
References: <200009151514.IAA26707@slayer.i.sourceforge.net>
 <200009151624.LAA10888@cj20424-a.reston1.va.home.com>
Message-ID: <200009151629.LAA10956@cj20424-a.reston1.va.home.com>

[I wrote]
> Hm...  This seems to add a lot of work to a very common item in
> pickles.
> 
> I had a different idea on how to make this safe from abuse: pass eval
> a globals dict with an empty __builtins__ dict, as follows:
> {'__builtins__': {}}.

I forgot that this is already how it's done.  But my point remains:
who says that this can cause security violations?  Sure, it can cause
unpickling to fail with an exception -- so can tons of other invalid
pickles.  But is it a security violation?

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From trentm@ActiveState.com  Fri Sep 15 16:30:28 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Fri, 15 Sep 2000 08:30:28 -0700
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules structmodule.c,2.38,2.39
In-Reply-To: <200009150732.AAA08842@slayer.i.sourceforge.net>; from loewis@users.sourceforge.net on Fri, Sep 15, 2000 at 12:32:01AM -0700
References: <200009150732.AAA08842@slayer.i.sourceforge.net>
Message-ID: <20000915083028.D30529@ActiveState.com>

On Fri, Sep 15, 2000 at 12:32:01AM -0700, Martin v. Löwis wrote:
> Modified Files:
> 	structmodule.c 
> Log Message:
> Check range for bytes and shorts. Closes bug #110845.
> 
> 
> + 	if (x < -32768 || x > 32767){
> + 		PyErr_SetString(StructError,
> + 				"short format requires -32768<=number<=32767");
> + 		return -1;
> + 	}

Would it not be cleaner to use SHRT_MIN and SHRT_MAX (from limits.h I think)
here?

> + 	if (x < 0 || x > 65535){
> + 		PyErr_SetString(StructError,
> + 				"short format requires 0<=number<=65535");
> + 		return -1;
> + 	}
> + 	* (unsigned short *)p = (unsigned short)x;

And USHRT_MIN and USHRT_MAX here?


No biggie though.
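(A small aside: limits.h defines SHRT_MIN, SHRT_MAX and USHRT_MAX, but there
is no USHRT_MIN; the unsigned lower bound is simply 0.)  The effect of the
range check is also visible from the Python side; a quick sketch, as it
behaves on a modern interpreter:

```python
import struct

# In-range values pack normally (explicit little-endian format gives a
# platform-independent byte pattern).
assert struct.pack('<h', 32767) == b'\xff\x7f'

# Out-of-range values hit the range check and raise struct.error.
for fmt, value in (('<h', 32768), ('<h', -32769), ('<H', -1), ('<H', 65536)):
    try:
        struct.pack(fmt, value)
    except struct.error:
        print(fmt, value, "rejected")
```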

Trent

-- 
Trent Mick
TrentM@ActiveState.com


From trentm@ActiveState.com  Fri Sep 15 16:35:19 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Fri, 15 Sep 2000 08:35:19 -0700
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules structmodule.c,2.38,2.39
In-Reply-To: <20000915083028.D30529@ActiveState.com>; from trentm@ActiveState.com on Fri, Sep 15, 2000 at 08:30:28AM -0700
References: <200009150732.AAA08842@slayer.i.sourceforge.net> <20000915083028.D30529@ActiveState.com>
Message-ID: <20000915083519.E30529@ActiveState.com>

On Fri, Sep 15, 2000 at 08:30:28AM -0700, Trent Mick wrote:
> On Fri, Sep 15, 2000 at 12:32:01AM -0700, Martin v. Löwis wrote:
> > Modified Files:
> > 	structmodule.c 
> > Log Message:
> > Check range for bytes and shorts. Closes bug #110845.
> > 
> > 
> > + 	if (x < -32768 || x > 32767){
> > + 		PyErr_SetString(StructError,
> > + 				"short format requires -32768<=number<=32767");
> > + 		return -1;
> > + 	}
> 
> Would it not be cleaner to use SHRT_MIN and SHRT_MAX (from limits.h I think)
> here?
> 
> > + 	if (x < 0 || x > 65535){
> > + 		PyErr_SetString(StructError,
> > + 				"short format requires 0<=number<=65535");
> > + 		return -1;
> > + 	}
> > + 	* (unsigned short *)p = (unsigned short)x;
> 
> And USHRT_MIN and USHRT_MAX here?
> 


Heh, heh. I jumped a bit quickly on that one. Three checkin messages later
this suggestion was applied. :) Sorry about that, Martin.


Trent

-- 
Trent Mick
TrentM@ActiveState.com


From paul@prescod.net  Fri Sep 15 17:02:40 2000
From: paul@prescod.net (Paul Prescod)
Date: Fri, 15 Sep 2000 09:02:40 -0700
Subject: [Python-Dev] Re: Is the 2.0 xml package too immature to release?
References: <Pine.LNX.4.21.0009141806390.25261-100000@amati.techno.com> <200009151240.HAA09833@cj20424-a.reston1.va.home.com>
Message-ID: <39C24820.FB951E80@prescod.net>

Guido van Rossum wrote:
> 
> ...
> 
> I'm not proposing that it be called xmlcore for eternity, but I see a
> *practical* problem with the 2.0 release: the xml-sig has a package
> called 'xml' (and they've had dibs on the name for years!) which is
> incompatible.  We can't force them to issue a new release under a
> different name.  I don't want to break other people's code that
> requires the xml-sig's xml package.

Martin v. Loewis, Greg Stein and others think that they have a
backwards-compatible solution. You can decide whether to let Martin try
versus go the "xmlcore" route, or else you could delegate that decision
(to someone in particular, please!).

> I propose the following:
> 
> We remove the '_xmlplus' feature.  It seems better not to rely on the
> xml-sig to provide upgrades to the core xml package.  We're planning
> 2.1, 2.2, ... releases 3-6 months apart which should be quick enough
> for most upgrade needs; we can issue service packs in between if
> necessary.

I could live with this proposal but it isn't my decision. Are you
instructing the SIG to do this? Or are you suggesting I go back to the
SIG and start a discussion on it? What decision making procedure do you
advocate? Who is supposed to make this decision?

> *IF* (and that's still a big "if"!) the xml core support is stable
> before Sept. 26, we'll keep it under the name 'xmlcore'.  If it's not
> stable, we remove it, but we'll consider it for 2.1.

We can easily have something stable within a few days from now. In fact,
all reported bugs are already fixed in patches that I will check in
today. There are no hard technical issues here.

-- 
 Paul Prescod - Not encumbered by corporate consensus
Simplicity does not precede complexity, but follows it. 
	- http://www.cs.yale.edu/homes/perlis-alan/quotes.html


From guido@beopen.com  Fri Sep 15 18:12:31 2000
From: guido@beopen.com (Guido van Rossum)
Date: Fri, 15 Sep 2000 12:12:31 -0500
Subject: [Python-Dev] Re: Is the 2.0 xml package too immature to release?
In-Reply-To: Your message of "Fri, 15 Sep 2000 09:02:40 MST."
 <39C24820.FB951E80@prescod.net>
References: <Pine.LNX.4.21.0009141806390.25261-100000@amati.techno.com> <200009151240.HAA09833@cj20424-a.reston1.va.home.com>
 <39C24820.FB951E80@prescod.net>
Message-ID: <200009151712.MAA13107@cj20424-a.reston1.va.home.com>

[me]
> > I'm not proposing that it be called xmlcore for eternity, but I see a
> > *practical* problem with the 2.0 release: the xml-sig has a package
> > called 'xml' (and they've had dibs on the name for years!) which is
> > incompatible.  We can't force them to issue a new release under a
> > different name.  I don't want to break other people's code that
> > requires the xml-sig's xml package.

[Paul]
> Martin v. Loewis, Greg Stein and others think that they have a
> backwards-compatible solution. You can decide whether to let Martin try
> versus go the "xmlcore" route, or else you could delegate that decision
> (to someone in particular, please!).

I will make the decision based on information gathered by Fred Drake.
You, Martin, Greg Stein and others have to get the information to him.

> > I propose the following:
> > 
> > We remove the '_xmlplus' feature.  It seems better not to rely on the
> > xml-sig to provide upgrades to the core xml package.  We're planning
> > 2.1, 2.2, ... releases 3-6 months apart which should be quick enough
> > for most upgrade needs; we can issue service packs in between if
> > necessary.
> 
> I could live with this proposal but it isn't my decision. Are you
> instructing the SIG to do this? Or are you suggesting I go back to the
> SIG and start a discussion on it? What decision making procedure do you
> advocate? Who is supposed to make this decision?

I feel that the XML-SIG isn't ready for action, so I'm making it easy
for them: they don't have to do anything.  Their package is called
'xml'.  The core package will be called something else.

> > *IF* (and that's still a big "if"!) the xml core support is stable
> > before Sept. 26, we'll keep it under the name 'xmlcore'.  If it's not
> > stable, we remove it, but we'll consider it for 2.1.
> 
> We can easily have something stable within a few days from now. In fact,
> all reported bugs are already fixed in patches that I will check in
> today. There are no hard technical issues here.

Thanks.  This is a great help!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From jeremy@beopen.com  Fri Sep 15 17:54:17 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Fri, 15 Sep 2000 12:54:17 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib pickle.py,1.38,1.39
In-Reply-To: <200009151624.LAA10888@cj20424-a.reston1.va.home.com>
References: <200009151514.IAA26707@slayer.i.sourceforge.net>
 <200009151624.LAA10888@cj20424-a.reston1.va.home.com>
Message-ID: <14786.21561.493632.580653@bitdiddle.concentric.net>

>>>>> "GvR" == Guido van Rossum <guido@beopen.com> writes:

  GvR> Hm...  This seems to add a lot of work to a very common item in
  GvR> pickles.

  GvR> I had a different idea on how to make this safe from abuse:
  GvR> pass eval a globals dict with an empty __builtins__ dict, as
  GvR> follows: {'__builtins__': {}}.

  GvR> Have you timed it?

I just timed it with a few test cases, using strings from
/dev/urandom. 

1. pickle dictionary with 25 items, 10-byte keys, 20-bytes values
   0.1% slowdown

2. pickle dictionary with 25 items, 15-byte keys, 100-byte values
   1.5% slowdown

3. pickle 8k string
   0.6% slowdown

The performance impact seems minimal.  And, of course, pickle is
already incredibly slow compared to cPickle.

So it isn't slow, but is it necessary?  I didn't give it much thought;
I merely saw that cPickle did these checks in addition to calling eval
with an empty builtins dict.
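Guido's suggested restricted eval can be sketched in isolation (a minimal
illustration of the technique, not the pickle code itself): passing a globals
dict whose '__builtins__' entry is an empty dict hides the builtin namespace
from the evaluated expression:

```python
# literal expressions still evaluate normally
value = eval("'hello ' + 'world'", {'__builtins__': {}})
assert value == 'hello world'

# but names from the builtin namespace are no longer reachable
try:
    eval("open('/etc/passwd')", {'__builtins__': {}})
except NameError:
    pass  # 'open' is not defined in the restricted environment
```

Note that this only blocks casual access to builtins; it is not a real
security boundary.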

Jim-- Is there a reason you added the "insecure string pickle"
feature?

I can't think of anything in particular that would go wrong other than
bizarre exceptions, e.g. OverflowError, SyntaxError, etc.  It would be
possible to construct pickles that produced unexpected objects, like
an instance with an attribute whose name is an integer:

    >>> x
    <__main__.Foo instance at 0x8140acc>
    >>> dir(x)
    [3, 'attr']

But there are so many other ways to produce weird objects using pickle
that this particular one does not seem to matter.

The only arguments I'm left with, which don't seem particularly
compelling, are:

1. Simplifies error checking for client, which can catch ValueError
   instead of multiplicity of errors
2. Compatibility with cPickle interface

Barring better ideas from Jim Fulton, it sounds like we should
probably remove the checks from both picklers.

Jeremy


From jeremy@beopen.com  Fri Sep 15 18:04:10 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Fri, 15 Sep 2000 13:04:10 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib pickle.py,1.38,1.39
In-Reply-To: <14786.21561.493632.580653@bitdiddle.concentric.net>
References: <200009151514.IAA26707@slayer.i.sourceforge.net>
 <200009151624.LAA10888@cj20424-a.reston1.va.home.com>
 <14786.21561.493632.580653@bitdiddle.concentric.net>
Message-ID: <14786.22154.794230.895070@bitdiddle.concentric.net>

I should have checked the revision history on cPickle before the last
post.  It says:

> revision 2.16
> date: 1997/12/08 15:15:16;  author: guido;  state: Exp;  lines: +50 -24
> Jim Fulton:
> 
>         - Loading non-binary string pickles checks for insecure
>           strings. This is needed because cPickle (still)
>           uses a restricted eval to parse non-binary string pickles.
>           This change is needed to prevent untrusted
>           pickles like::
> 
>             "S'hello world'*2000000\012p0\012."
> 
>           from hosing an application.
> 

So the justification seems to be that an attacker could easily consume
a lot of memory on a system and bog down an application if eval is
used to load the strings.  I imagine there are other ways to cause
trouble, but I don't see much harm in preventing this particular one.

Try running this with the old pickle.  It locked my system up for a
good 30 seconds :-)

x = pickle.loads("S'hello world'*20000000\012p0\012.")
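The guard Jim added can be approximated like this (a rough sketch of the idea,
not cPickle's actual code; the function name is illustrative): before eval-ing
the payload, verify that it is nothing more than a single quoted string
literal, so an expression like `'hello world'*2000000` is rejected up front:

```python
def looks_like_plain_string(repr_text):
    """Return True only if repr_text is a single quoted string literal
    with no trailing expression (e.g. no '*2000000' after the quote)."""
    if len(repr_text) < 2:
        return False
    quote = repr_text[0]
    if quote not in ("'", '"') or repr_text[-1] != quote:
        return False
    # reject unescaped quote characters inside the body: they would mean
    # the literal ends early and something else follows it
    body = repr_text[1:-1]
    i = 0
    while i < len(body):
        if body[i] == '\\':
            i += 2          # skip the escaped character
        elif body[i] == quote:
            return False
        else:
            i += 1
    return True

assert looks_like_plain_string("'hello world'")
assert not looks_like_plain_string("'hello world'*2000000")
```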

Jeremy


From jeremy@beopen.com  Fri Sep 15 23:27:15 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: 15 Sep 2000 18:27:15 -0400
Subject: [Python-Dev] [comp.lang.python] sys.setdefaultencoding (2.0b1)
Message-ID: <blhf7h1ebg.fsf@bitdiddle.concentric.net>

I was just reading comp.lang.python and saw an interesting question
that I couldn't answer.  Is anyone here game?

Jeremy
------- Start of forwarded message -------
From: Donn Cave <donn@u.washington.edu>
Newsgroups: comp.lang.python
Subject: sys.setdefaultencoding (2.0b1)
Date: 12 Sep 2000 22:11:31 GMT
Organization: University of Washington
Message-ID: <8pm9mj$3ie2$1@nntp6.u.washington.edu>
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1

I see codecs.c has gone to some trouble to defer character encoding
setup until it's actually required for something, but it's required
rather early in the process anyway when site.py calls
sys.setdefaultencoding("ascii")

If I strike that line from site.py, startup time goes down by about
a third.

Is that too simple a fix?  Does setdefaultencoding("ascii") do something
important?

	Donn Cave, donn@u.washington.edu
------- End of forwarded message -------


From guido@beopen.com  Sat Sep 16 00:31:52 2000
From: guido@beopen.com (Guido van Rossum)
Date: Fri, 15 Sep 2000 18:31:52 -0500
Subject: [Python-Dev] [comp.lang.python] sys.setdefaultencoding (2.0b1)
In-Reply-To: Your message of "15 Sep 2000 18:27:15 -0400."
 <blhf7h1ebg.fsf@bitdiddle.concentric.net>
References: <blhf7h1ebg.fsf@bitdiddle.concentric.net>
Message-ID: <200009152331.SAA01300@cj20424-a.reston1.va.home.com>

> I was just reading comp.lang.python and saw an interesting question
> that I couldn't answer.  Is anyone here game?

From reading the source code for unicodeobject.c, _PyUnicode_Init()
sets the default to "ascii" anyway, so the call in site.py is quite
unnecessary.  I think it's a good idea to remove it.  (Look around
though -- there are some "if 0:" blocks that could make it necessary.
Maybe the setdefaultencoding() call should be inside an "if 0:" block
too, with a comment.)

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


> Jeremy
> ------- Start of forwarded message -------
> From: Donn Cave <donn@u.washington.edu>
> Newsgroups: comp.lang.python
> Subject: sys.setdefaultencoding (2.0b1)
> Date: 12 Sep 2000 22:11:31 GMT
> Organization: University of Washington
> Message-ID: <8pm9mj$3ie2$1@nntp6.u.washington.edu>
> Mime-Version: 1.0
> Content-Type: text/plain; charset=ISO-8859-1
> 
> I see codecs.c has gone to some trouble to defer character encoding
> setup until it's actually required for something, but it's required
> rather early in the process anyway when site.py calls
> sys.setdefaultencoding("ascii")
> 
> If I strike that line from site.py, startup time goes down by about
> a third.
> 
> Is that too simple a fix?  Does setdefaultencoding("ascii") do something
> important?
> 
> 	Donn Cave, donn@u.washington.edu
> ------- End of forwarded message -------
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev@python.org
> http://www.python.org/mailman/listinfo/python-dev



From nascheme@enme.ucalgary.ca  Fri Sep 15 23:36:14 2000
From: nascheme@enme.ucalgary.ca (Neil Schemenauer)
Date: Fri, 15 Sep 2000 16:36:14 -0600
Subject: [Python-Dev] [comp.lang.python] sys.setdefaultencoding (2.0b1)
In-Reply-To: <200009152331.SAA01300@cj20424-a.reston1.va.home.com>; from Guido van Rossum on Fri, Sep 15, 2000 at 06:31:52PM -0500
References: <blhf7h1ebg.fsf@bitdiddle.concentric.net> <200009152331.SAA01300@cj20424-a.reston1.va.home.com>
Message-ID: <20000915163614.A7376@keymaster.enme.ucalgary.ca>

While we're optimizing the startup time, how about lazily loading the
LICENSE.txt file?

  Neil


From akuchlin@mems-exchange.org  Sat Sep 16 02:10:30 2000
From: akuchlin@mems-exchange.org (A.M. Kuchling)
Date: Fri, 15 Sep 2000 21:10:30 -0400
Subject: [Python-Dev] Problem with using _xmlplus
Message-ID: <E13a6Uw-0003ud-00@207-172-111-249.s249.tnt1.ann.va.dialup.rcn.com>

The code in Lib/xml/__init__.py seems to be insufficient to completely
delegate matters to the _xmlplus package.  Consider this session with
'python -v':

Script started on Fri Sep 15 21:02:59 2000
[amk@207-172-111-249 quotations]$ python -v
  ...
>>> from xml.sax import saxlib, saxexts
import xml # directory /usr/lib/python2.0/xml
import xml # precompiled from /usr/lib/python2.0/xml/__init__.pyc
import _xmlplus # directory /usr/lib/python2.0/site-packages/_xmlplus
import _xmlplus # from /usr/lib/python2.0/site-packages/_xmlplus/__init__.py
import xml.sax # directory /usr/lib/python2.0/site-packages/_xmlplus/sax
import xml.sax # from /usr/lib/python2.0/site-packages/_xmlplus/sax/__init__.py
import xml.sax.saxlib # from /usr/lib/python2.0/site-packages/_xmlplus/sax/saxlib.py
import xml.sax.saxexts # from /usr/lib/python2.0/site-packages/_xmlplus/sax/saxexts.py
import imp # builtin

So far, so good.  Now try creating a parser.  This fails; I've hacked
the code slightly so it doesn't swallow the responsible ImportError:

>>> p=saxexts.XMLParserFactory.make_parser("xml.sax.drivers.drv_pyexpat")
import xml # directory /usr/lib/python2.0/xml
import xml # precompiled from /usr/lib/python2.0/xml/__init__.pyc
import sax # directory /usr/lib/python2.0/xml/sax
import sax # precompiled from /usr/lib/python2.0/xml/sax/__init__.pyc
import sax.handler # precompiled from /usr/lib/python2.0/xml/sax/handler.pyc
import sax.expatreader # precompiled from /usr/lib/python2.0/xml/sax/expatreader.pyc
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/usr/lib/python2.0/site-packages/_xmlplus/sax/saxexts.py", line 78, in make_parser
    info=rec_find_module(parser_name)
  File "/usr/lib/python2.0/site-packages/_xmlplus/sax/saxexts.py", line 25, in rec_find_module
    lastmod=apply(imp.load_module,info)
  File "/usr/lib/python2.0/xml/sax/__init__.py", line 21, in ?
    from expatreader import ExpatParser
  File "/usr/lib/python2.0/xml/sax/expatreader.py", line 23, in ?
    from xml.sax import xmlreader
ImportError: cannot import name xmlreader

_xmlplus.sax.saxexts uses imp.find_module() and imp.load_module() to
load parser drivers; it looks like those functions aren't looking at
sys.modules and therefore aren't being fooled by the sys.modules
hackery in Lib/xml/__init__.py, so the _xmlplus package isn't
completely overriding the xml/ package.

The guts of Python's import machinery have always been mysterious to
me; can anyone suggest how to fix this?
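The sys.modules technique the xml package relies on can be sketched in
isolation (using the stdlib `string` module purely as a stand-in for
`_xmlplus`; the alias name here is made up): dropping a module into
sys.modules under another name satisfies ordinary `import` statements, but
low-level loaders like imp.find_module() bypass that table and go straight to
the filesystem, which is exactly the failure above.

```python
import sys
import string  # stand-in for the real implementation package

# alias: ordinary imports of 'string_alias' now resolve to 'string',
# because the import statement consults sys.modules first
sys.modules['string_alias'] = string

import string_alias
print(string_alias is string)  # prints True: alias and original are one module
```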

--amk


From guido@beopen.com  Sat Sep 16 03:06:28 2000
From: guido@beopen.com (Guido van Rossum)
Date: Fri, 15 Sep 2000 21:06:28 -0500
Subject: [Python-Dev] Problem with using _xmlplus
In-Reply-To: Your message of "Fri, 15 Sep 2000 21:10:30 -0400."
 <E13a6Uw-0003ud-00@207-172-111-249.s249.tnt1.ann.va.dialup.rcn.com>
References: <E13a6Uw-0003ud-00@207-172-111-249.s249.tnt1.ann.va.dialup.rcn.com>
Message-ID: <200009160206.VAA09344@cj20424-a.reston1.va.home.com>

[Andrew discovers that the _xmlplus hack is broken]

I have recently proposed a simple and robust fix: forget all import
hacking, and use a different name for the xml package in the core and
the xml package provided by PyXML.  I first suggested the name
'xmlcore' for the core xml package, but Martin von Loewis suggested a
better name: 'xmlbase'.

Since PyXML has had dibs on the 'xml' package name for years, it's
best not to try to change that.  We can't force everyone who has
installed an old version of PyXML to upgrade (and to erase the old
package!) so the best solution is to pick a new name for the core XML
support package.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From martin@loewis.home.cs.tu-berlin.de  Sat Sep 16 07:24:41 2000
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Sat, 16 Sep 2000 08:24:41 +0200
Subject: [Python-Dev] Re: [XML-SIG] Problem with using _xmlplus
In-Reply-To: <E13a6Uw-0003ud-00@207-172-111-249.s249.tnt1.ann.va.dialup.rcn.com>
 (amk1@erols.com)
References: <E13a6Uw-0003ud-00@207-172-111-249.s249.tnt1.ann.va.dialup.rcn.com>
Message-ID: <200009160624.IAA00804@loewis.home.cs.tu-berlin.de>

> The guts of Python's import machinery have always been mysterious to
> me; can anyone suggest how to fix this?

I had a patch on SF waiting for approval for some time
(http://sourceforge.net/patch/?func=detailpatch&patch_id=101444&group_id=6473)
to fix that; I have now installed it.

Regards,
Martin


From larsga@garshol.priv.no  Sat Sep 16 11:26:34 2000
From: larsga@garshol.priv.no (Lars Marius Garshol)
Date: 16 Sep 2000 12:26:34 +0200
Subject: [XML-SIG] Re: [Python-Dev] Problem with using _xmlplus
In-Reply-To: <200009160206.VAA09344@cj20424-a.reston1.va.home.com>
References: <E13a6Uw-0003ud-00@207-172-111-249.s249.tnt1.ann.va.dialup.rcn.com> <200009160206.VAA09344@cj20424-a.reston1.va.home.com>
Message-ID: <m3lmwsy6n9.fsf@lambda.garshol.priv.no>

* Guido van Rossum
| 
| [suggests: the XML package in the Python core 'xmlbase']
| 
| Since PyXML has had dibs on the 'xml' package name for years, it's
| best not to try to change that.  We can't force everyone who has
| installed an old version of PyXML to upgrade (and to erase the old
| package!) so the best solution is to pick a new name for the core
| XML support package.

For what it's worth: I like this approach very much. It's simple,
intuitive and not likely to cause any problems.

--Lars M.



From mal@lemburg.com  Sat Sep 16 19:19:59 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Sat, 16 Sep 2000 20:19:59 +0200
Subject: [Python-Dev] [comp.lang.python] sys.setdefaultencoding (2.0b1)
References: <blhf7h1ebg.fsf@bitdiddle.concentric.net> <200009152331.SAA01300@cj20424-a.reston1.va.home.com>
Message-ID: <39C3B9CF.51441D94@lemburg.com>

Guido van Rossum wrote:
> 
> > I was just reading comp.lang.python and saw an interesting question
> > that I couldn't answer.  Is anyone here game?
> 
> From reading the source code for unicodeobject.c, _PyUnicode_Init()
> sets the default to "ascii" anyway, so the call in site.py is quite
> unnecessary.  I think it's a good idea to remove it.  (Look around
> though -- there are some "if 0:" blocks that could make it necessary.
> Maybe the setdefaultencoding() call should be inside an "if 0:" block
> too, with a comment.)

Agreed. I'll fix this next week.

Some background: the first codec lookup done causes the encodings
package to be loaded which then registers the encodings package
codec search function. Then the 'ascii' codec is looked up
via the codec registry. All this takes time and should only
be done in case the code really uses codecs... (at least that
was the idea).
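The lookup chain described above is observable through codecs.lookup(): the
first lookup pulls in the encodings package, registers its search function,
and the result is then cached in the codec registry (a small illustration):

```python
import codecs

# the first lookup triggers loading of the encodings package and
# registration of its codec search function
info = codecs.lookup('ascii')
assert info.name == 'ascii'

# subsequent lookups for the same name hit the registry cache and
# return the same CodecInfo object rather than repeating the search
assert codecs.lookup('ascii') is info
```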

> --Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)
> 
> > Jeremy
> > ------- Start of forwarded message -------
> > From: Donn Cave <donn@u.washington.edu>
> > Newsgroups: comp.lang.python
> > Subject: sys.setdefaultencoding (2.0b1)
> > Date: 12 Sep 2000 22:11:31 GMT
> > Organization: University of Washington
> > Message-ID: <8pm9mj$3ie2$1@nntp6.u.washington.edu>
> > Mime-Version: 1.0
> > Content-Type: text/plain; charset=ISO-8859-1
> >
> > I see codecs.c has gone to some trouble to defer character encoding
> > setup until it's actually required for something, but it's required
> > rather early in the process anyway when site.py calls
> > sys.setdefaultencoding("ascii")
> >
> > If I strike that line from site.py, startup time goes down by about
> > a third.
> >
> > Is that too simple a fix?  Does setdefaultencoding("ascii") do something
> > important?
> >
> >       Donn Cave, donn@u.washington.edu
> > ------- End of forwarded message -------
> >
> > _______________________________________________
> > Python-Dev mailing list
> > Python-Dev@python.org
> > http://www.python.org/mailman/listinfo/python-dev
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev@python.org
> http://www.python.org/mailman/listinfo/python-dev

-- 
Marc-Andre Lemburg
________________________________________________________________________
Business:                                        http://www.lemburg.com/
Python Pages:                             http://www.lemburg.com/python/


From fdrake@beopen.com  Sat Sep 16 23:10:19 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Sat, 16 Sep 2000 18:10:19 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0042.txt,1.13,1.14
In-Reply-To: <200009162201.PAA21016@slayer.i.sourceforge.net>
References: <200009162201.PAA21016@slayer.i.sourceforge.net>
Message-ID: <14787.61387.996949.986311@cj42289-a.reston1.va.home.com>

Barry Warsaw writes:
 > Added request for cStringIO.StringIO.readlines() method.  Closes SF
 > bug #110686.

  I think the Patch Manager has a patch for this one, but I don't know
if it's any good.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member


From bwarsaw@beopen.com  Sat Sep 16 23:38:46 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Sat, 16 Sep 2000 18:38:46 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0042.txt,1.13,1.14
References: <200009162201.PAA21016@slayer.i.sourceforge.net>
 <14787.61387.996949.986311@cj42289-a.reston1.va.home.com>
Message-ID: <14787.63094.667182.915703@anthem.concentric.net>

>>>>> "Fred" == Fred L Drake, Jr <fdrake@beopen.com> writes:

    >> Added request for cStringIO.StringIO.readlines() method.
    >> Closes SF bug #110686.

    Fred>   I think the Patch Manager has a patch for this one, but I
    Fred> don't know if it's any good.

It's patch #101423.  JimF, can you take a look and give a thumbs up or
down?  Or better yet, apply it to your canonical copy and send us an
update for the core.

http://sourceforge.net/patch/?func=detailpatch&patch_id=101423&group_id=5470

-Barry

From martin@loewis.home.cs.tu-berlin.de  Sun Sep 17 12:58:32 2000
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Sun, 17 Sep 2000 13:58:32 +0200
Subject: [Python-Dev] [ Bug #110662 ] rfc822 (PR#358)
Message-ID: <200009171158.NAA01325@loewis.home.cs.tu-berlin.de>

Regarding your report in

http://sourceforge.net/bugs/?func=detailbug&bug_id=110662&group_id=5470

I can't reproduce the problem. In 2.0b1, 

>>> s="Location: https://www.website.com:443/tengah/Dpc/vContent.jhtml?page_type=3&PLANID=4&CONTENTPAGEID=0&TengahSession=312442259237-529/2748412123003458168/-1407548368/4/7002/7002/7004/7004\r\n\r\n" 
>>> t=rfc822.Message(cStringIO.StringIO(s)) 
>>> t['location'] 
'https://www.website.com:443/tengah/Dpc/vContent.jhtml?page_type=3&PLANID=4&CONTENTPAGEID=0&TengahSession=312442259237-529/2748412123003458168/-1407548368/4/7002/7002/7004/7004' 

works fine for me. If the line break between Location: and the URL in
the original report was intentional, rfc822.Message is right in
rejecting the header: Continuation lines must start with white space.

I also cannot see how the patch could improve anything; proper
continuation lines are already supported. On what system did you
experience the problem?

If I misunderstood the report, please let me know.

Regards,
Martin

From trentm@ActiveState.com  Sun Sep 17 22:27:18 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Sun, 17 Sep 2000 14:27:18 -0700
Subject: [Python-Dev] problems importing _tkinter on Linux build
Message-ID: <20000917142718.A25180@ActiveState.com>

I get the following error trying to import _tkinter in a Python 2.0 build:

> ./python
./python: error in loading shared libraries: libtk8.3.so: cannot open shared object file: No such file or directory


Here is the relevant section of my Modules/Setup:

_tkinter _tkinter.c tkappinit.c -DWITH_APPINIT \
    -I/usr/local/include \
    -I/usr/X11R6/include \
    -L/usr/local/lib \
    -ltk8.3 -ltcl8.3 \
    -L/usr/X11R6/lib \
    -lX11


I got the Tcl/Tk 8.3 source from dev.scriptics.com, and ran
  > ./configure --enable-gcc --enable-shared
  > make
  > make install   # as root
in the tcl and tk source directories.


The tcl and tk libs are in /usr/local/lib:

    [trentm@molotok contrib]$ ls -alF /usr/local/lib
    ...
    -r-xr-xr-x   1 root     root       579177 Sep 17 14:03 libtcl8.3.so*
    -rw-r--r--   1 root     root         1832 Sep 17 14:03 libtclstub8.3.a
    -r-xr-xr-x   1 root     root       778034 Sep 17 14:10 libtk8.3.so*
    -rw-r--r--   1 root     root         3302 Sep 17 14:10 libtkstub8.3.a
    drwxr-xr-x   8 root     root         4096 Sep 17 14:03 tcl8.3/
    -rw-r--r--   1 root     root         6722 Sep 17 14:03 tclConfig.sh
    drwxr-xr-x   4 root     root         4096 Sep 17 14:10 tk8.3/
    -rw-r--r--   1 root     root         3385 Sep 17 14:10 tkConfig.sh


Does anybody know what my problem is? Is the error from libtk8.3.so
complaining that it cannot load a library on which it depends? Is there some
system library dependency that I am likely missing?


Thanks,
Trent

-- 
Trent Mick
TrentM@ActiveState.com

From trentm@ActiveState.com  Sun Sep 17 22:46:14 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Sun, 17 Sep 2000 14:46:14 -0700
Subject: [Python-Dev] problems importing _tkinter on Linux build
In-Reply-To: <20000917142718.A25180@ActiveState.com>; from trentm@ActiveState.com on Sun, Sep 17, 2000 at 02:27:18PM -0700
References: <20000917142718.A25180@ActiveState.com>
Message-ID: <20000917144614.A25718@ActiveState.com>

On Sun, Sep 17, 2000 at 02:27:18PM -0700, Trent Mick wrote:
> 
> I get the following error trying to import _tkinter in a Python 2.0 build:
> 
> > ./python
> ./python: error in loading shared libraries: libtk8.3.so: cannot open shared object file: No such file or directory
> 

Duh. I learned about LD_LIBRARY_PATH (set LD_LIBRARY_PATH to /usr/local/lib)
and now everything is hunky dory. I presumed that /usr/local/lib would be
on the default search path for shared libraries. Bad assumption, I guess.

Trent


-- 
Trent Mick
TrentM@ActiveState.com

From martin@loewis.home.cs.tu-berlin.de  Mon Sep 18 07:59:33 2000
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Mon, 18 Sep 2000 08:59:33 +0200
Subject: [Python-Dev] problems importing _tkinter on Linux build
Message-ID: <200009180659.IAA14068@loewis.home.cs.tu-berlin.de>

> I presumed that /usr/local/lib would be on the default search path
> for shared libraries. Bad assumption I guess.

On Linux, having /usr/local/lib in the search path is quite
common. The default search path is defined in /etc/ld.so.conf. What
distribution are you using? Perhaps somebody forgot to run
/sbin/ldconfig after installing the tcl library? Does tclsh find it?

Regards,
Martin


From jbearce@copeland.com  Mon Sep 18 12:22:36 2000
From: jbearce@copeland.com (jbearce@copeland.com)
Date: Mon, 18 Sep 2000 07:22:36 -0400
Subject: [Python-Dev] Re: [ Bug #110662 ] rfc822 (PR#358)
Message-ID: <OF66DA0B3D.234625E6-ON8525695E.003DFEEF@rsd.citistreet.org>

No, the line break wasn't intentional.  I ran into this problem on a stock
RedHat 6.2 (intel) system with python 1.5.2 reading pages from an iPlanet
Enterprise Server 4.1 on an NT box.  The patch I included fixed the problem
for me.  This was a consistent problem for me so I should be able to
reproduce it, and I'll send you any new info I can gather.  I'll also
try 2.0b1 with my script to see if it works.

Thanks,
Jim



                                                                                                                                
"Martin v. Loewis" <martin@loewis.home.cs.tu-berlin.de>
To: jbearce@copeland.com
cc: python-dev@python.org
Subject: [ Bug #110662 ] rfc822 (PR#358)
09/17/2000 07:58 AM

Regarding your report in

http://sourceforge.net/bugs/?func=detailbug&bug_id=110662&group_id=5470

I can't reproduce the problem. In 2.0b1,

>>> s="Location: https://www.website.com:443/tengah/Dpc/vContent.jhtml?page_type=3&PLANID=4&CONTENTPAGEID=0&TengahSession=312442259237-529/2748412123003458168/-1407548368/4/7002/7002/7004/7004\r\n\r\n"
>>> t=rfc822.Message(cStringIO.StringIO(s))
>>> t['location']
'https://www.website.com:443/tengah/Dpc/vContent.jhtml?page_type=3&PLANID=4&CONTENTPAGEID=0&TengahSession=312442259237-529/2748412123003458168/-1407548368/4/7002/7002/7004/7004'


works fine for me. If the line break between Location: and the URL in
the original report was intentional, rfc822.Message is right in
rejecting the header: Continuation lines must start with white space.

I also cannot see how the patch could improve anything; proper
continuation lines are already supported. On what system did you
experience the problem?

If I misunderstood the report, please let me know.

Regards,
Martin




From bwarsaw@beopen.com  Mon Sep 18 14:35:32 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Mon, 18 Sep 2000 09:35:32 -0400 (EDT)
Subject: [Python-Dev] problems importing _tkinter on Linux build
References: <20000917142718.A25180@ActiveState.com>
 <20000917144614.A25718@ActiveState.com>
Message-ID: <14790.6692.908424.16235@anthem.concentric.net>

>>>>> "TM" == Trent Mick <trentm@ActiveState.com> writes:

    TM> Duh, learning about LD_LIBRARY_PATH (set LD_LIBRARY_PATH to
    TM> /usr/local/lib) and everything is hunky dory. I presumed that
    TM> /usr/local/lib would be on the default search path for shared
    TM> libraries. Bad assumption I guess.

Also, look at the -R flag to ld.  In my experience (primarily on
Solaris), any time you compiled with a -L flag you absolutely /had/ to
include a similar -R flag, otherwise you'd force all your users to set
LD_LIBRARY_PATH.

-Barry

From trentm@ActiveState.com  Mon Sep 18 17:39:04 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Mon, 18 Sep 2000 09:39:04 -0700
Subject: [Python-Dev] problems importing _tkinter on Linux build
In-Reply-To: <14790.6692.908424.16235@anthem.concentric.net>; from bwarsaw@beopen.com on Mon, Sep 18, 2000 at 09:35:32AM -0400
References: <20000917142718.A25180@ActiveState.com> <20000917144614.A25718@ActiveState.com> <14790.6692.908424.16235@anthem.concentric.net>
Message-ID: <20000918093904.A23881@ActiveState.com>

On Mon, Sep 18, 2000 at 09:35:32AM -0400, Barry A. Warsaw wrote:
> 
> >>>>> "TM" == Trent Mick <trentm@ActiveState.com> writes:
> 
>     TM> Duh, learning about LD_LIBRARY_PATH (set LD_LIBRARY_PATH to
>     TM> /usr/local/lib) and everything is hunky dory. I presumed that
>     TM> /usr/local/lib would be on the default search path for shared
>     TM> libraries. Bad assumption I guess.
> 
> Also, look at the -R flag to ld.  In my experience (primarily on
> Solaris), any time you compiled with a -L flag you absolutely /had/ to
> include a similar -R flag, otherwise you'd force all your users to set
> LD_LIBRARY_PATH.
> 

Thanks, Barry. Reading about -R led me to -rpath, which works for me. Here is
the algorithm from the info docs:

`-rpath-link DIR'
     When using ELF or SunOS, one shared library may require another.
     This happens when an `ld -shared' link includes a shared library
     as one of the input files.

     When the linker encounters such a dependency when doing a
     non-shared, non-relocateable link, it will automatically try to
     locate the required shared library and include it in the link, if
     it is not included explicitly.  In such a case, the `-rpath-link'
     option specifies the first set of directories to search.  The
     `-rpath-link' option may specify a sequence of directory names
     either by specifying a list of names separated by colons, or by
     appearing multiple times.

     The linker uses the following search paths to locate required
     shared libraries.
       1. Any directories specified by `-rpath-link' options.

       2. Any directories specified by `-rpath' options.  The difference
          between `-rpath' and `-rpath-link' is that directories
          specified by `-rpath' options are included in the executable
          and used at runtime, whereas the `-rpath-link' option is only
          effective at link time.

       3. On an ELF system, if the `-rpath' and `-rpath-link' options
          were not used, search the contents of the environment variable
          `LD_RUN_PATH'.

       4. On SunOS, if the `-rpath' option was not used, search any
          directories specified using `-L' options.

       5. For a native linker, the contents of the environment variable
          `LD_LIBRARY_PATH'.

       6. The default directories, normally `/lib' and `/usr/lib'.

     For the native ELF linker, as the last resort, the contents of
     /etc/ld.so.conf is used to build the set of directories to search.

     If the required shared library is not found, the linker will issue
     a warning and continue with the link.
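
The numbered search order above can be sketched as a small pure-Python
emulation (directory names hypothetical; the real resolution is of course
done by ld and ld.so):

```python
# Emulation of the quoted search order (simplified; native ELF case,
# so the SunOS-only -L step is omitted).
def search_order(rpath_link=(), rpath=(), ld_run_path='',
                 ld_library_path='', defaults=('/lib', '/usr/lib')):
    dirs = list(rpath_link)                    # 1. -rpath-link directories
    dirs += list(rpath)                        # 2. -rpath directories
    if not rpath_link and not rpath and ld_run_path:
        dirs += ld_run_path.split(':')         # 3. LD_RUN_PATH (ELF only)
    if ld_library_path:
        dirs += ld_library_path.split(':')     # 5. LD_LIBRARY_PATH
    dirs += list(defaults)                     # 6. default directories
    return dirs

# With an embedded rpath, /usr/local/lib is searched first:
print(search_order(rpath=['/usr/local/lib'])[0])   # -> /usr/local/lib
```

This is why Barry's -R/-rpath suggestion removes the need for users to set
LD_LIBRARY_PATH: the directory is baked into the executable at step 2.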


Trent


-- 
Trent Mick
TrentM@ActiveState.com

From trentm@ActiveState.com  Mon Sep 18 17:42:51 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Mon, 18 Sep 2000 09:42:51 -0700
Subject: [Python-Dev] problems importing _tkinter on Linux build
In-Reply-To: <200009180659.IAA14068@loewis.home.cs.tu-berlin.de>; from martin@loewis.home.cs.tu-berlin.de on Mon, Sep 18, 2000 at 08:59:33AM +0200
References: <200009180659.IAA14068@loewis.home.cs.tu-berlin.de>
Message-ID: <20000918094251.B23881@ActiveState.com>

On Mon, Sep 18, 2000 at 08:59:33AM +0200, Martin v. Loewis wrote:
> > I presumed that /usr/local/lib would be on the default search path
> > for shared libraries. Bad assumption I guess.
> 
> On Linux, having /usr/local/lib in the search path is quite
> common. The default search path is defined in /etc/ld.so.conf. What
> distribution are you using? Perhaps somebody forgot to run
> /sbin/ldconfig after installing the tcl library? Does tclsh find it?

Using RedHat 6.2


[trentm@molotok ~]$ cat /etc/ld.so.conf
/usr/X11R6/lib
/usr/i486-linux-libc5/lib


So no /usr/local/lib there. Barry's suggestion worked for me, though I think
I agree that /usr/local/lib is a reasonable default search path entry.
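
A quick way to check this from Python (a diagnostic sketch; the sample text
is Trent's ld.so.conf above):

```python
# Check whether a directory is listed in ld.so.conf-style text
# (comment and glob handling omitted for brevity).
def in_search_path(conf_text, directory):
    return directory in [line.strip() for line in conf_text.splitlines()]

redhat62 = "/usr/X11R6/lib\n/usr/i486-linux-libc5/lib\n"
print(in_search_path(redhat62, '/usr/local/lib'))   # -> False
```

Matching Trent's observation: on stock RedHat 6.2, /usr/local/lib is not in
the dynamic linker's configured search path until it is added and
/sbin/ldconfig is rerun.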

Thanks,
Trent

-- 
Trent Mick
TrentM@ActiveState.com

From jeremy@beopen.com  Mon Sep 18 23:33:02 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Mon, 18 Sep 2000 18:33:02 -0400 (EDT)
Subject: [Python-Dev] guidelines for bug triage
Message-ID: <14790.38942.543387.233812@bitdiddle.concentric.net>

Last week I promised to post some guidelines on bug triage.  In the
interim, the number of open bugs has dropped by about 30.  We still
have 71 open bugs to deal with.  The goal is to get the number of open
bugs below 50 before the 2.0b2 release next week, so there is still a
lot to do.  So I've written up some general guidelines, which I'll
probably put in a PEP.

One thing that the guidelines lack is a list of people willing to
handle bug reports and their areas of expertise.  If people send me
email with that information, I'll include it in the PEP.

Jeremy


1. Make sure the bug category and bug group are correct.  If they are 
   correct, it is easier for someone interested in helping to find
   out, say, what all the open Tkinter bugs are.

2. If it's a minor feature request that you don't plan to address
   right away, add it to PEP 42 or ask the owner to add it for you.
   If you add the bug to PEP 42, mark the bug as "feature request",
   "later", and "closed"; and add a comment to the bug saying that
   this is the case (mentioning the PEP explicitly).

3. Assign the bug a reasonable priority.  We don't yet have a clear
   sense of what each priority should mean, except that 9 is highest
   and 1 is lowest.  One rule, however, is that bugs with priority
   seven or higher must be fixed before the next release.

4. If a bug report doesn't have enough information to allow you to
   reproduce or diagnose it, send email to the original submitter and
   ask for more information.  If the original report is really thin
   and your email doesn't get a response after a reasonable waiting
   period, you can close the bug.

5. If you fix a bug, mark the status as "Fixed" and close it.  In the
   comments, include the CVS revision numbers of the affected
   files.  In the CVS checkin message, include the SourceForge bug
   number *and* a normal description of the change.

6. If you are assigned a bug that you are unable to deal with, assign
   it to someone else.  The guys at PythonLabs get paid to fix these
   bugs, so pick one of them if there is no other obvious candidate.


From barry@scottb.demon.co.uk  Mon Sep 18 23:28:46 2000
From: barry@scottb.demon.co.uk (Barry Scott)
Date: Mon, 18 Sep 2000 23:28:46 +0100
Subject: [Python-Dev] Python 1.5.2 modules need porting to 2.0 because of unicode - comments please
Message-ID: <000001c021bf$cf081f20$060210ac@private>

I have managed to get all our critical python code up and
running under 2.0b1#4, around 15,000 lines. We use win32com
and wxPython extensions. The code drives SourceSafe and includes
a Web server that schedules builds for us.

The only problem I encountered was mixing string
and unicode types.

Using smtplib I was passing in a unicode type as the body
of the message. The send() call hangs. I used encode() and all
is well.

Is this a user error in the use of smtplib or a bug?

I found that I had a lot of unicode floating around from win32com
that I was passing into wxPython. It checks for string and raises
exceptions. More use of encode() and we are up and running.

Is this what you expected when you added unicode?

		Barry


From barry@scottb.demon.co.uk  Mon Sep 18 23:43:59 2000
From: barry@scottb.demon.co.uk (Barry Scott)
Date: Mon, 18 Sep 2000 23:43:59 +0100
Subject: [Python-Dev] How do you want bugs reported against 2.0 beta?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEEIHFAA.tim_one@email.msn.com>
Message-ID: <000201c021c1$ef71c7f0$060210ac@private>

At the risk of having my head bitten off again...

Why don't you tell people how to report bugs in python on the web site
or the documentation?

I'd expect this info in the docs and on the web site for python.

	Barry


From guido@beopen.com  Tue Sep 19 00:45:12 2000
From: guido@beopen.com (Guido van Rossum)
Date: Mon, 18 Sep 2000 18:45:12 -0500
Subject: [Python-Dev] Python 1.5.2 modules need porting to 2.0 because of unicode - comments please
In-Reply-To: Your message of "Mon, 18 Sep 2000 23:28:46 +0100."
 <000001c021bf$cf081f20$060210ac@private>
References: <000001c021bf$cf081f20$060210ac@private>
Message-ID: <200009182345.SAA03116@cj20424-a.reston1.va.home.com>

> I have managed to get all our critical python code up and
> running under 2.0b1#4, around 15,000 lines. We use win32com
> and wxPython extensions. The code drive SourceSafe and includes
> a Web server that schedules builds for us.
> 
> The only problem I encounted was the problem of mixing string
> and unicode types.
> 
> Using the smtplib I was passing in a unicode type as the body
> of the message. The send() call hangs. I use encode() and all
> is well.
> 
> Is this a user error in the use of smtplib or a bug?
> 
> I found that I had a lot of unicode floating around from win32com
> that I was passing into wxPython. It checks for string and raises
> exceptions. More use of encode() and we are up and running.
> 
> Is this what you expected when you added unicode?

Barry, I'm unclear on what exactly is happening.  Where does the
Unicode come from?  You implied that your code worked under 1.5.2,
which doesn't support Unicode.  How can code that works under 1.5.2
suddenly start producing Unicode strings?  Unless you're now applying
the existing code to new (Unicode) input data -- in which case, yes,
we expect that fixes are sometimes needed.

The smtplib problem may be easily explained -- AFAIK, the SMTP
protocol doesn't support Unicode, and the module isn't Unicode-aware,
so it is probably writing garbage to the socket.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)

From guido@beopen.com  Tue Sep 19 00:51:26 2000
From: guido@beopen.com (Guido van Rossum)
Date: Mon, 18 Sep 2000 18:51:26 -0500
Subject: [Python-Dev] How do you want bugs reported against 2.0 beta?
In-Reply-To: Your message of "Mon, 18 Sep 2000 23:43:59 +0100."
 <000201c021c1$ef71c7f0$060210ac@private>
References: <000201c021c1$ef71c7f0$060210ac@private>
Message-ID: <200009182351.SAA03195@cj20424-a.reston1.va.home.com>

> At the risk of having my head bitten off again...

Don't worry, it's only a virtual bite... :-)

> Why don't you tell people how to report bugs in python on the web site
> or the documentation?
> 
> I'd expect this info in the docs and on the web site for python.

In the README file:

    Bug reports
    -----------

    To report or search for bugs, please use the Python Bug
    Tracker at http://sourceforge.net/bugs/?group_id=5470.

But I agree that nobody reads the README file any more.  So yes, it
should be added to the website.  I don't think it belongs in the
documentation pack, although Fred may disagree (where should it be
added?).

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)

From barry@scottb.demon.co.uk  Tue Sep 19 00:00:13 2000
From: barry@scottb.demon.co.uk (Barry Scott)
Date: Tue, 19 Sep 2000 00:00:13 +0100
Subject: [Python-Dev] 2.0 Optimization & speed
In-Reply-To: <200009081623.SAA14090@python.inrialpes.fr>
Message-ID: <000701c021c4$3412d550$060210ac@private>

There needs to be a set of benchmarks that can be used to test the effect
of any changes. Is there a set that exists already that can be used?

		Barry


> Behalf Of Vladimir Marangozov
> 
> Continuing my impressions on the user's feedback to date: Donn Cave
> & MAL are at least two voices I've heard about an overall slowdown
> of the 2.0b1 release compared to 1.5.2. Frankly, I have no idea where
> this slowdown comes from and I believe that we have only vague guesses
> about the possible causes: unicode database, more opcodes in ceval, etc.
> 
> I wonder whether we are in a position to try improving Python's
> performance with some `wise quickies' in a next beta. But this raises
> a more fundamental question on what is our margin for manoeuvres at this
> point. This in turn implies that we need some classification of the
> proposed optimizations to date.
> 
> Perhaps it would be good to create a dedicated Web page for this, but
> in the meantime, let's try to build a list/table of the ideas that have
> been proposed so far. This would be useful anyway, and the list would be
> filled as time goes.
> 
> Trying to push this initiative one step further, here's a very rough start
> on the top of my head:
> 
> Category 1: Algorithmic Changes
> 
> These are the most promising, since they don't relate to pure technicalities
> but imply potential improvements with some evidence.
> I'd put in this category:
> 
> - the dynamic dictionary/string specialization by Fred Drake
>   (this is already in). Can this be applied in other areas? If so, where?
> 
> - the Python-specific mallocs. Actually, I'm pretty sure that a lot of
>   `overhead' is due to the standard mallocs which happen to be expensive
>   for Python in both space and time. Python is very malloc-intensive.
>   The only reason I've postponed my obmalloc patch is that I still haven't
>   provided an interface which allows evaluating its impact on the
>   mem size consumption. It gives noticeable speedup on all machines, so
>   it accounts as a good candidate w.r.t. performance.
> 
> - ??? (maybe some parts of MAL's optimizations could go here)
> 
> Category 2: Technical / Code optimizations
> 
> This category includes all (more or less) controversial proposals, like
> 
> - my latest lookdict optimizations (a typical controversial `quickie')
> 
> - opcode folding & reordering. Actually, I'm unclear on why Guido
>   postponed the reordering idea; it has received positive feedback
>   and all theoretical reasoning and practical experiments showed that
>   this "could" help, although without any guarantees. Nobody reported
>   slowdowns, though. This is typically a change without real dangers.
> 
> - kill the async / pending calls logic. (Tim, what happened with this
>   proposal?)
> 
> - compact the unicodedata database, which is expected to reduce the
>   mem footprint, maybe improve startup time, etc. (ongoing)
> 
> - proposal about optimizing the "file hits" on startup.
> 
> - others?
> 
> If there are potential `wise quickies', maybe it's good to refresh
> them now and experiment a bit more before the final release?
> 
> -- 
>        Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
> http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev@python.org
> http://www.python.org/mailman/listinfo/python-dev
> 

From MarkH@ActiveState.com  Tue Sep 19 00:18:18 2000
From: MarkH@ActiveState.com (Mark Hammond)
Date: Tue, 19 Sep 2000 10:18:18 +1100
Subject: [Python-Dev] Python 1.5.2 modules need porting to 2.0 because of unicode - comments please
In-Reply-To: <200009182345.SAA03116@cj20424-a.reston1.va.home.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBIEPDDJAA.MarkH@ActiveState.com>

[Guido]

> Barry, I'm unclear on what exactly is happening.  Where does the
> Unicode come from?  You implied that your code worked under 1.5.2,
> which doesn't support Unicode.  How can code that works under 1.5.2
> suddenly start producing Unicode strings?  Unless you're now applying
> the existing code to new (Unicode) input data -- in which case, yes,
> we expect that fixes are sometimes needed.

My guess is that the Unicode strings are coming from COM.  In 1.5, we used
the Win32 specific Unicode object, and win32com did lots of explicit
str()s - the user of the end object usually saw real Python strings.

For 1.6 and later, I changed this, so that real Python Unicode objects are
used and returned instead of the strings.  I figured this would be a good
test for Unicode integration, as Unicode and strings are ultimately
supposed to be interchangeable ;-)

win32com.client.__init__ starts with:

NeedUnicodeConversions = not hasattr(__builtin__, "unicode")

This forces the flag "true" under 1.5, and false otherwise.  Barry can force it
to "true", and win32com will always force a str() over all Unicode objects.

However, this will _still_ break in a few cases (and I have had some
reported).  str() of a Unicode object can often raise that ugly "char out
of range" error.  As Barry notes, the code would have to change to do an
"encode('mbcs')" to be safe anyway...
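
The failure Mark mentions is easy to reproduce: the implicit str() of 2.0
amounted to an ASCII encode, shown explicitly here with a hypothetical
COM-style value:

```python
name = u'caf\xe9'            # e-acute: one character str() can't handle

try:
    name.encode('ascii')     # what str(name) amounted to in 2.0
except UnicodeEncodeError:   # the "char out of range" failure
    print('char out of range')

# The safe route: pick an explicit encoding up front
# (Mark suggests 'mbcs' on Windows; utf-8 shown here).
safe = name.encode('utf-8')
```

Any character above 127 triggers the error, which is why forcing str() over
every returned object is not a complete fix.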

But regardless of where Barry's Unicode objects come from, his point
remains open.  Do we consider the library's lack of Unicode awareness a
bug, or do we drop any pretence of string and unicode objects being
interchangeable?

As a related issue, do we consider it a problem that str(unicode_ob) often
fails?  The users on c.l.py appear to...

Mark.


From gward@mems-exchange.org  Tue Sep 19 00:29:00 2000
From: gward@mems-exchange.org (Greg Ward)
Date: Mon, 18 Sep 2000 19:29:00 -0400
Subject: [Python-Dev] Speaking of bug triage...
Message-ID: <20000918192859.A12253@ludwig.cnri.reston.va.us>

... just what are the different categories supposed to mean?
Specifically, what's the difference between "Library" and "Modules"?

The library-related open bugs in the "Library" category cover the
following modules:
  * anydbm
  * rfc822 (several!)
  * mimedecode
  * urlparse
  * cmath
  * CGIHTTPServer

And in the "Modules" category we have:
  * mailbox
  * socket/os
  * re/sre (several)
  * anydbm
  * xml/_xmlplus
  * cgi/xml

Hmmm... looks to me like there's no difference between "Library" and
"Modules" -- heck, I could have guessed that just from looking at the
names.  The library *is* modules!

Was this perhaps meant to be a distinction between pure Python and
extension modules?

        Greg

From jeremy@beopen.com  Tue Sep 19 00:36:41 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Mon, 18 Sep 2000 19:36:41 -0400 (EDT)
Subject: [Python-Dev] Speaking of bug triage...
In-Reply-To: <20000918192859.A12253@ludwig.cnri.reston.va.us>
References: <20000918192859.A12253@ludwig.cnri.reston.va.us>
Message-ID: <14790.42761.418440.578432@bitdiddle.concentric.net>

>>>>> "GW" == Greg Ward <gward@mems-exchange.org> writes:

  GW> Was this perhaps meant to be a distinction between pure Python
  GW> and extension modules?

That's right -- Library == ".py" and Modules == ".c".  Perhaps not the
best names, but they're short.

Jeremy

From tim_one@email.msn.com  Tue Sep 19 00:34:30 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Mon, 18 Sep 2000 19:34:30 -0400
Subject: [Python-Dev] Speaking of bug triage...
In-Reply-To: <20000918192859.A12253@ludwig.cnri.reston.va.us>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEMNHGAA.tim_one@email.msn.com>

[Greg Ward]
> ... just what are the different categories supposed to mean?
> Specifically, what's the difference between "Library" and "Modules"?

Nobody knows.  I've been using Library for .py files under Lib/, and Modules
for anything written in C whose name works in an "import".  Other people are
doing other things, but they're wrong <wink>.



From guido@beopen.com  Tue Sep 19 01:43:17 2000
From: guido@beopen.com (Guido van Rossum)
Date: Mon, 18 Sep 2000 19:43:17 -0500
Subject: [Python-Dev] Speaking of bug triage...
In-Reply-To: Your message of "Mon, 18 Sep 2000 19:36:41 -0400."
 <14790.42761.418440.578432@bitdiddle.concentric.net>
References: <20000918192859.A12253@ludwig.cnri.reston.va.us>
 <14790.42761.418440.578432@bitdiddle.concentric.net>
Message-ID: <200009190043.TAA06331@cj20424-a.reston1.va.home.com>

>   GW> Was this perhaps meant to be a distinction between pure Python
>   GW> and extension modules?
> 
> That's right -- Library == ".py" and Modules == ".c".  Perhaps not the
> best names, but they're short.

Think "subdirectories in the source tree" and you'll never make a
mistake again.  (For this particular choice. :-)

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)

From barry@scottb.demon.co.uk  Tue Sep 19 00:43:25 2000
From: barry@scottb.demon.co.uk (Barry Scott)
Date: Tue, 19 Sep 2000 00:43:25 +0100
Subject: [Python-Dev] Python 1.5.2 modules need porting to 2.0 because of unicode - comments please
In-Reply-To: <200009182345.SAA03116@cj20424-a.reston1.va.home.com>
Message-ID: <000801c021ca$3c9daa50$060210ac@private>

Mark's Python COM code is the source of unicode. I'm guessing that the old
1.5.2 support coerced to string and now that unicode is around Mark's
code gives me unicode strings. Our app is driving Microsoft Visual
SourceSafe thru COM.

The offending line that upgraded all strings to unicode that broke mail:

file.write( 'Crit: Searching for new and changed files since label %s\n' % previous_source_label )

previous_source_label is unicode from a call to a COM object.

file is a StringIO object.
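
In Python 2 the % formatting silently promoted the whole result to Unicode,
which every later byte-oriented consumer then choked on. A sketch in modern
terms, with a hypothetical label value, where the type split is explicit:

```python
import io

label = 'V1.2'   # in Barry's case this came back from COM as Unicode
line = 'Crit: Searching for new and changed files since label %s\n' % label

buf = io.StringIO()          # text buffer: accepts the promoted string
buf.write(line)

raw = io.BytesIO()           # byte-oriented consumer: the mixing error
try:                         # now surfaces immediately as a TypeError
    raw.write(line)
except TypeError:
    print('type mismatch')
```

In 2.0 the mismatch surfaced much later (or as a hang, as with smtplib),
which is what made it so tedious to trace back to the COM call.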

		Barry

> -----Original Message-----
> From: python-dev-admin@python.org [mailto:python-dev-admin@python.org]On
> Behalf Of Guido van Rossum
> Sent: 19 September 2000 00:45
> To: Barry Scott
> Cc: PythonDev
> Subject: Re: [Python-Dev] Python 1.5.2 modules need porting to 2.0
> because of unicode - comments please
> 
> 
> > I have managed to get all our critical python code up and
> > running under 2.0b1#4, around 15,000 lines. We use win32com
> > and wxPython extensions. The code drive SourceSafe and includes
> > a Web server that schedules builds for us.
> > 
> > The only problem I encounted was the problem of mixing string
> > and unicode types.
> > 
> > Using the smtplib I was passing in a unicode type as the body
> > of the message. The send() call hangs. I use encode() and all
> > is well.
> > 
> > Is this a user error in the use of smtplib or a bug?
> > 
> > I found that I had a lot of unicode floating around from win32com
> > that I was passing into wxPython. It checks for string and raises
> > exceptions. More use of encode() and we are up and running.
> > 
> > Is this what you expected when you added unicode?
> 
> Barry, I'm unclear on what exactly is happening.  Where does the
> Unicode come from?  You implied that your code worked under 1.5.2,
> which doesn't support Unicode.  How can code that works under 1.5.2
> suddenly start producing Unicode strings?  Unless you're now applying
> the existing code to new (Unicode) input data -- in which case, yes,
> we expect that fixes are sometimes needed.
> 
> The smtplib problem may be easily explained -- AFAIK, the SMTP
> protocol doesn't support Unicode, and the module isn't Unicode-aware,
> so it is probably writing garbage to the socket.
> 
> --Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev@python.org
> http://www.python.org/mailman/listinfo/python-dev
> 

From fdrake@beopen.com  Tue Sep 19 00:45:55 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Mon, 18 Sep 2000 19:45:55 -0400 (EDT)
Subject: [Python-Dev] How do you want bugs reported against 2.0 beta?
In-Reply-To: <000201c021c1$ef71c7f0$060210ac@private>
References: <LNBBLJKPBEHFEDALKOLCCEEIHFAA.tim_one@email.msn.com>
 <000201c021c1$ef71c7f0$060210ac@private>
Message-ID: <14790.43315.8034.192884@cj42289-a.reston1.va.home.com>

Barry Scott writes:
 > At the risk of having my head bitten off again...
 > 
 > Why don't you tell people how to report bugs in python on the web site
 > or the documentation?
 > 
 > I'd expect this info in the docs and on the web site for python.

  Good point.  I think this should be available at both locations as
well.  I'll see what I can do about the documentation.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member


From gward@mems-exchange.org  Tue Sep 19 00:55:35 2000
From: gward@mems-exchange.org (Greg Ward)
Date: Mon, 18 Sep 2000 19:55:35 -0400
Subject: [Python-Dev] Speaking of bug triage...
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEMNHGAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Mon, Sep 18, 2000 at 07:34:30PM -0400
References: <20000918192859.A12253@ludwig.cnri.reston.va.us> <LNBBLJKPBEHFEDALKOLCGEMNHGAA.tim_one@email.msn.com>
Message-ID: <20000918195535.A19131@ludwig.cnri.reston.va.us>

On 18 September 2000, Tim Peters said:
> Nobody knows.  I've been using Library for .py files under Lib/, and Modules
> for anything written in C whose name works in an "import".  Other people are
> doing other things, but they're wrong <wink>.

That's what I suspected.  I've just reclassified a couple of bugs.  I
left ambiguous ones where they were.

        Greg

From barry@scottb.demon.co.uk  Tue Sep 19 01:05:17 2000
From: barry@scottb.demon.co.uk (Barry Scott)
Date: Tue, 19 Sep 2000 01:05:17 +0100
Subject: [Python-Dev] Python 1.5.2 modules need porting to 2.0 because of unicode - comments please
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBIEPDDJAA.MarkH@ActiveState.com>
Message-ID: <000901c021cd$4a9b2df0$060210ac@private>

> But regardless of where Barry's Unicode objects come from, his point
> remains open.  Do we consider the library's lack of Unicode awareness a
> bug, or do we drop any pretence of string and unicode objects being
> interchangeable?
> 
> As a related issue, do we consider that str(unicode_ob) often fails is a
> problem?  The users on c.l.py appear to...
> 
> Mark.

Exactly.

I want unicode from Mark's code, unicode is goodness.

But the principle of least astonishment may well be broken in the library,
indeed in the language.

It took me 40 minutes to prove that the unicode came from Mark's code and
I know the code involved intimately. Debugging these failures is tedious.

I don't have an opinion as to the best resolution yet.

One option would be for Mark's code to default to string. But that does not
help once someone chooses to enable unicode in Mark's code.

Maybe '%s' % u'x' should return 'x' not u'x', and u'%s' % 's' return u's'.

Maybe 's' + u'x' should return 'sx' not u'sx', and u's' + 'x' return u'sx'.

The above 2 maybe's would have hidden the problem in my code, barring exceptions.
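
Barry's two maybes amount to coercing toward the left-hand operand's type.
A hypothetical helper (not Python's actual behaviour, then or now) makes the
proposal concrete, using bytes/str for the old string/unicode split:

```python
# Coerce the right operand to the left operand's type before
# concatenating (ASCII assumed, as with the 2.0 system encoding).
def concat(a, b):
    if isinstance(a, bytes) and isinstance(b, str):
        return a + b.encode('ascii')     # 's' + u'x'  -> 'sx'
    if isinstance(a, str) and isinstance(b, bytes):
        return a + b.decode('ascii')     # u's' + 'x'  -> u'sx'
    return a + b

assert concat(b's', u'x') == b'sx'
assert concat(u's', b'x') == u'sx'
```

Note that this only hides the mixing until a non-ASCII character shows up,
which is Barry's "barring exceptions" caveat.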

	Barry


From barry@scottb.demon.co.uk  Tue Sep 19 01:13:33 2000
From: barry@scottb.demon.co.uk (Barry Scott)
Date: Tue, 19 Sep 2000 01:13:33 +0100
Subject: [Python-Dev] How do you want bugs reported against 2.0 beta?
In-Reply-To: <200009182351.SAA03195@cj20424-a.reston1.va.home.com>
Message-ID: <000a01c021ce$72b5cab0$060210ac@private>

What README? It's not on my Start - Programs - Python 2.0 menu.

You don't mean I have to look on the disk do you :-)

	Barry

> -----Original Message-----
> From: guido@cj20424-a.reston1.va.home.com
> [mailto:guido@cj20424-a.reston1.va.home.com]On Behalf Of Guido van
> Rossum
> Sent: 19 September 2000 00:51
> To: Barry Scott
> Cc: PythonDev
> Subject: Re: [Python-Dev] How do you want bugs reported against 2.0
> beta?
> 
> 
> > At the risk of having my head bitten off again...
> 
> Don't worry, it's only a virtual bite... :-)
> 
> > Why don't you tell people how to report bugs in python on the web site
> > or the documentation?
> > 
> > I'd expect this info in the docs and on the web site for python.
> 
> In the README file:
> 
>     Bug reports
>     -----------
> 
>     To report or search for bugs, please use the Python Bug
>     Tracker at http://sourceforge.net/bugs/?group_id=5470.
> 
> But I agree that nobody reads the README file any more.  So yes, it
> should be added to the website.  I don't think it belongs in the
> documentation pack, although Fred may disagree (where should it be
> added?).
> 
> --Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)
> 

From tim_one@email.msn.com  Tue Sep 19 01:22:13 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Mon, 18 Sep 2000 20:22:13 -0400
Subject: [Python-Dev] 2.0 Optimization & speed
In-Reply-To: <000701c021c4$3412d550$060210ac@private>
Message-ID: <LNBBLJKPBEHFEDALKOLCKENBHGAA.tim_one@email.msn.com>

[Barry Scott]
> There needs to be a set of benchmarks that can be used to test
> the effect of any changes. Is there a set that exist already that
> can be used?

None adequate.  Calls for volunteers in the past have been met with silence.

Lib/test/pystone.py is remarkable in that it may be the least typical of all
Python programs <0.4 wink>.  It seems a good measure of how long it takes to
make a trip around the eval loop, though.

Marc-Andre Lemburg put together a much fancier suite, that times a wide
variety of basic Python operations and constructs more-or-less in isolation
from each other.  It can be very helpful in pinpointing specific timing
regressions.

That's it.
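
For the curious, the core of any such suite is a tiny timing loop. A minimal
sketch (hypothetical micro-benchmark; not pybench's actual API):

```python
import time

def bench(fn, n=100_000):
    """Time n calls of fn and return the elapsed seconds."""
    t0 = time.perf_counter()
    for _ in range(n):
        fn()
    return time.perf_counter() - t0

# A micro-benchmark in the pybench spirit: one construct, in isolation.
elapsed = bench(lambda: [i * i for i in range(10)])
print('%.3fs for 100k list comprehensions' % elapsed)
```

The hard part, as Tim notes, is not the loop but assembling a suite whose
operations are representative enough to catch real regressions.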



From tim_one@email.msn.com  Tue Sep 19 05:44:56 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Tue, 19 Sep 2000 00:44:56 -0400
Subject: [Python-Dev] test_minidom now failing on Windows
Message-ID: <LNBBLJKPBEHFEDALKOLCGENMHGAA.tim_one@email.msn.com>

http://sourceforge.net/bugs/?func=detailbug&bug_id=114775&group_id=5470

Add info (fails on Linux?  Windows-specific?) or fix or something; assigned
to Paul.



From guido@beopen.com  Tue Sep 19 07:05:55 2000
From: guido@beopen.com (Guido van Rossum)
Date: Tue, 19 Sep 2000 01:05:55 -0500
Subject: [Python-Dev] test_minidom now failing on Windows
In-Reply-To: Your message of "Tue, 19 Sep 2000 00:44:56 -0400."
 <LNBBLJKPBEHFEDALKOLCGENMHGAA.tim_one@email.msn.com>
References: <LNBBLJKPBEHFEDALKOLCGENMHGAA.tim_one@email.msn.com>
Message-ID: <200009190605.BAA01019@cj20424-a.reston1.va.home.com>

> http://sourceforge.net/bugs/?func=detailbug&bug_id=114775&group_id=5470
> 
> Add info (fails on Linux?  Windows-specific?) or fix or something; assigned
> to Paul.

It's obviously broken.  The test output contains numbers that are
specific per run:

<xml.dom.minidom.Document instance at 0xa104c8c>

and

[('168820100<class xml.dom.minidom.Element at 0xa0cc58c>', "{'childNodes': []}"), ('168926628<class xml.dom.minidom.Text at 0xa0ccfac>', "{'childNodes': []}"), ('168722260<class xml.dom.minidom.Text at 0xa0ccfac>', "{'childNodes': []}"), ('168655020<class xml.dom.minidom.Text at 0xa0ccfac>', "{'childNodes': []}"), ('168650868<class xml.dom.minidom.Text at 0xa0ccfac>', "{'childNodes': []}"), ('168663308<class xml.dom.minidom.Text at 0xa0ccfac>', "{'childNodes': []}"), ('168846892<class xml.dom.minidom.Text at 0xa0ccfac>', "{'childNodes': []}"), ('169039972<class xml.dom.minidom.Text at 0xa0ccfac>', "{'childNodes': []}"), ('168666508<class xml.dom.minidom.Element at 0xa0cc58c>', "{'childNodes': []}"), ('168730780<class xml.dom.minidom.Element at 0xa0cc58c>', "{'childNodes': []}")]

Paul, please fix this!!!!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)

From martin@loewis.home.cs.tu-berlin.de  Tue Sep 19 09:13:16 2000
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Tue, 19 Sep 2000 10:13:16 +0200
Subject: [Python-Dev] Python 1.5.2 modules need porting to 2.0 because of unicode - comments please
Message-ID: <200009190813.KAA01033@loewis.home.cs.tu-berlin.de>

> The smtplib problem may be easily explained -- AFAIK, the SMTP
> protocol doesn't support Unicode, and the module isn't
> Unicode-aware, so it is probably writing garbage to the socket.

I've investigated this somewhat, and noticed the cause of the problem.
The send method of the socket passes the raw memory representation of
the Unicode object to send(2). On i386, this comes out as UTF-16LE.

It appears that this behaviour is not documented anywhere (where is
the original specification of the Unicode type, anyway?).

I believe this behaviour is a bug, on the grounds of being
confusing. The same holds for writing a Unicode string to a file in
binary mode. Again, it should not write out the internal
representation. Or else, why doesn't file.write(42) work? I want it
to write the internal representation in binary :-)

So in essence, I suggest that the Unicode object does not implement
the buffer interface. If that has any undesirable consequences (which
ones?), I suggest that 'binary write' operations (sockets, files)
explicitly check for Unicode objects, and either reject them, or
invoke the system encoding (i.e. ASCII). 

In the case of smtplib, this would do the right thing: the protocol
requires ASCII commands, so if anybody passes a Unicode string with
characters outside ASCII, you'd get an error.
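
Martin's proposal, sketched as a hypothetical byte-oriented write in modern
terms:

```python
# Reject text unless it survives the system (ASCII) encoding; never
# leak the internal memory representation onto the wire.
def binary_write(buf, data):
    if isinstance(data, str):
        data = data.encode('ascii')   # raises on non-ASCII characters
    buf.extend(data)

out = bytearray()
binary_write(out, 'MAIL FROM:<barry@example.com>\r\n')   # fine: pure ASCII
try:
    binary_write(out, u'caf\xe9')                        # error, not garbage
except UnicodeEncodeError:
    print('rejected non-ASCII')
```

Applied to smtplib, an ASCII-clean body goes out unchanged and anything
outside ASCII fails loudly instead of hanging the send().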

Regards,
Martin


From effbot@telia.com  Tue Sep 19 09:35:29 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Tue, 19 Sep 2000 10:35:29 +0200
Subject: [Python-Dev] Python 1.5.2 modules need porting to 2.0 because of unicode - comments please
References: <200009190813.KAA01033@loewis.home.cs.tu-berlin.de>
Message-ID: <00cd01c02214$94c4f540$766940d5@hagrid>

martin wrote:

> I've investigated this somewhat, and noticed the cause of the problem.
> The send method of the socket passes the raw memory representation of
> the Unicode object to send(2). On i386, this comes out as UTF-16LE.
...
> I believe this behaviour is a bug, on the grounds of being
> confusing. The same holds for writing a Unicode string to a file in
> binary mode. Again, it should not write out the internal
> representation. Or else, why doesn't file.write(42) work? I want that
> it writes the internal representation in binary :-)
...
> So in essence, I suggest that the Unicode object does not implement
> the buffer interface.

I agree.

</F>


From mal@lemburg.com  Tue Sep 19 09:35:33 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Tue, 19 Sep 2000 10:35:33 +0200
Subject: [Python-Dev] 2.0 Optimization & speed
References: <LNBBLJKPBEHFEDALKOLCKENBHGAA.tim_one@email.msn.com>
Message-ID: <39C72555.E14D747C@lemburg.com>

Tim Peters wrote:
> 
> [Barry Scott]
> > There needs to be a set of benchmarks that can be used to test
> > the effect of any changes. Is there a set that exist already that
> > can be used?
> 
> None adequate.  Calls for volunteers in the past have been met with silence.
> 
> Lib/test/pystone.py is remarkable in that it may be the least typical of all
> Python programs <0.4 wink>.  It seems a good measure of how long it takes to
> make a trip around the eval loop, though.
> 
> Marc-Andre Lemburg put together a much fancier suite, that times a wide
> variety of basic Python operations and constructs more-or-less in isolation
> from each other.  It can be very helpful in pinpointing specific timing
> regressions.

Plus it's extensible, so you can add whatever test you feel you
need by simply dropping in a new module and editing a Setup
module. pybench is available from my Python Pages.
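A pybench-style test is, roughly, a class that times a tight loop of one basic operation against an empty calibration loop. The sketch below is only loosely modeled on pybench's conventions; the class layout and names here are illustrative, not pybench's exact API:

```python
import time

class CompareIntegers:
    """Illustrative micro-benchmark: time repeated integer comparisons."""
    rounds = 100000

    def test(self):
        # the operation under test, repeated a few times per loop iteration
        for _ in range(self.rounds):
            3 < 4
            3 < 4
            3 < 4

    def calibrate(self):
        # an empty loop of the same shape, to subtract loop overhead
        for _ in range(self.rounds):
            pass

def run(bench):
    t0 = time.time(); bench.test(); t1 = time.time()
    bench.calibrate(); t2 = time.time()
    # time attributable to the operations themselves
    return (t1 - t0) - (t2 - t1)

elapsed = run(CompareIntegers())
```

Adding a new test then amounts to dropping in another such class and registering it with the suite.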

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/

From mal@lemburg.com  Tue Sep 19 10:02:46 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Tue, 19 Sep 2000 11:02:46 +0200
Subject: [Python-Dev] Python 1.5.2 modules need porting to 2.0 because of
 unicode - comments please
References: <200009190813.KAA01033@loewis.home.cs.tu-berlin.de>
Message-ID: <39C72BB6.A45A8E77@lemburg.com>

"Martin v. Loewis" wrote:
> 
> > The smtplib problem may be easily explained -- AFAIK, the SMTP
> > protocol doesn't support Unicode, and the module isn't
> > Unicode-aware, so it is probably writing garbage to the socket.
> 
> I've investigated this somewhat, and noticed the cause of the problem.
> The send method of the socket passes the raw memory representation of
> the Unicode object to send(2). On i386, this comes out as UTF-16LE.

The send method probably uses "s#" to write out the data. Since
this maps to the getreadbuf buffer slot, the Unicode object returns
a pointer to the internal buffer.
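In today's Python the mismatch can be made explicit (this is an illustrative sketch using modern encode() calls; Python 2.0 exposed the raw buffer implicitly through getreadbuf):

```python
s = "abc"
# What the raw internal buffer looked like on a little-endian i386 build:
raw = s.encode("utf-16-le")
# What callers writing protocol data actually expected:
text = s.encode("ascii")
assert raw == b"a\x00b\x00c\x00"   # internal representation, with NUL bytes
assert text == b"abc"              # the encoded character data
```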
 
> It appears that this behaviour is not documented anywhere (where is
> the original specification of the Unicode type, anyway?).

Misc/unicode.txt has it all. Documentation for PyArg_ParseTuple()
et al. is in Doc/ext/ext.tex.
 
> I believe this behaviour is a bug, on the grounds of being
> confusing. The same holds for writing a Unicode string to a file in
> binary mode. Again, it should not write out the internal
> representation. Or else, why doesn't file.write(42) work? I'd want it
> to write the internal representation in binary :-)

This was discussed on python-dev at length earlier this year.
The outcome was that files opened in binary mode should write
raw object data to the file (using getreadbuf) while files opened
in text mode should write character data (using getcharbuf).
 
Note that Unicode objects are the first to distinguish between
getcharbuf and getreadbuf.

IMHO, the bug really is in getargs.c: "s" uses getcharbuf while
"s#" uses getreadbuf. The ideal would be to use "t"+"t#" exclusively
for getcharbuf and "s"+"s#" exclusively for getreadbuf, but I guess
common usage prevents this.

> So in essence, I suggest that the Unicode object does not implement
> the buffer interface. If that has any undesirable consequences (which
> ones?), I suggest that 'binary write' operations (sockets, files)
> explicitly check for Unicode objects, and either reject them, or
> invoke the system encoding (i.e. ASCII).

It's too late for any generic changes in the Unicode area.

The right thing to do is to make the *tools* Unicode aware, since
you can't really expect the Unicode-string integration mechanism 
to fiddle things right in every possible case out there.

E.g. in the above case it is clear that 8-bit text is being sent over
the wire, so the smtplib module should explicitly call the .encode()
method to encode the data into whatever encoding is suitable.
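A minimal sketch of that advice (the helper name here is hypothetical, not smtplib's actual code): encode explicitly to ASCII before writing, so non-ASCII data raises instead of leaking raw memory onto the wire.

```python
def send_command(send, line):
    # Hypothetical helper: SMTP commands are ASCII, so encode explicitly
    # and let characters outside ASCII raise an error.
    if isinstance(line, str):
        line = line.encode("ascii")  # UnicodeEncodeError for non-ASCII
    send(line + b"\r\n")

sent = []
send_command(sent.append, "HELO example.org")
assert sent == [b"HELO example.org\r\n"]
```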

> In the case of smtplib, this would do the right thing: the protocol
> requires ASCII commands, so if anybody passes a Unicode string with
> characters outside ASCII, you'd get an error.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/

From mal@lemburg.com  Tue Sep 19 10:13:13 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Tue, 19 Sep 2000 11:13:13 +0200
Subject: [Python-Dev] Python 1.5.2 modules need porting to 2.0 because of
 unicode - comments please
References: <000901c021cd$4a9b2df0$060210ac@private>
Message-ID: <39C72E29.6593F920@lemburg.com>

Barry Scott wrote:
> 
> > But regardless of where Barry's Unicode objects come from, his point
> > remains open.  Do we consider the library's lack of Unicode awareness a
> > bug, or do we drop any pretence of string and unicode objects being
> > interchangeable?

Python's stdlib is *not* Unicode ready. This should be seen as a project
for 2.1.

> > As a related issue, do we consider it a problem that str(unicode_ob)
> > often fails?  The users on c.l.py appear to...

It will only fail if the Unicode object is not compatible with the
default encoding. If users want to use a different encoding for
interfacing Unicode to strings, they should call .encode() explicitly,
possibly through a helper function.

> > Mark.
> 
> Exactly.
> 
> I want unicode from Mark's code, unicode is goodness.
> 
> But the principle of least astonishment may well be broken in the library,
> indeed in the language.
> 
> It took me 40 minutes to prove that the unicode came from Mark's code and
> I know the code involved intimately. Debugging these failures is tedious.

To debug these things, simply switch off Unicode-to-string conversion
by editing site.py (look at the comments at the end of the module).
All conversion attempts will then raise an exception.

> I don't have an opinion as to the best resolution yet.
> 
> One option would be for Mark's code to default to string. But that does not
> help once someone chooses to enable unicode in Mark's code.
> 
> Maybe '%s' % u'x' should return 'x' not u'x' and u'%s' % 's' return u's'
> 
> Maybe 's' + u'x' should return 'sx' not u'sx'. and u's' + 'x' returns u'sx'
> 
> The above 2 maybe's would have hidden the problem in my code, barring exceptions.

When designing the Unicode-string integration we decided to
use the same coercion rules as for numbers: always coerce to the
"bigger" type. Anything else would have caused even more
difficulties.

Again, what needs to be done is to make the tools Unicode aware,
not the magic ;-)
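The numeric precedent referred to above can be shown directly (an illustrative aside, not code from the thread):

```python
# Mixed-type numeric operations coerce to the "bigger" type:
assert type(1 + 2.0) is float    # int + float -> float
assert type(True + 1) is int     # bool + int  -> int
# The Unicode-string integration followed the same rule:
# in Python 2.0, str + unicode yielded unicode.
```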

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/

From fredrik@pythonware.com  Tue Sep 19 10:38:01 2000
From: fredrik@pythonware.com (Fredrik Lundh)
Date: Tue, 19 Sep 2000 11:38:01 +0200
Subject: [Python-Dev] Python 1.5.2 modules need porting to 2.0 because of unicode - comments please
References: <200009190813.KAA01033@loewis.home.cs.tu-berlin.de> <39C72BB6.A45A8E77@lemburg.com>
Message-ID: <006601c0221d$4e55b690$0900a8c0@SPIFF>

mal wrote:

> > So in essence, I suggest that the Unicode object does not implement
> > the buffer interface. If that has any undesirable consequences (which
> > ones?), I suggest that 'binary write' operations (sockets, files)
> > explicitly check for Unicode objects, and either reject them, or
> > invoke the system encoding (i.e. ASCII).
> 
> It's too late for any generic changes in the Unicode area.

it's not too late to fix bugs.

> The right thing to do is to make the *tools* Unicode aware, since
> you can't really expect the Unicode-string integration mechanism 
> to fiddle things right in every possible case out there.

no, but people may expect Python to raise an exception instead
of doing something that is not only non-portable, but also clearly
wrong in most real-life cases.

</F>


From mal@lemburg.com  Tue Sep 19 11:34:40 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Tue, 19 Sep 2000 12:34:40 +0200
Subject: [Python-Dev] Python 1.5.2 modules need porting to 2.0 because of
 unicode - comments please
References: <200009190813.KAA01033@loewis.home.cs.tu-berlin.de> <39C72BB6.A45A8E77@lemburg.com> <006601c0221d$4e55b690$0900a8c0@SPIFF>
Message-ID: <39C74140.B4A31C60@lemburg.com>

Fredrik Lundh wrote:
> 
> mal wrote:
> 
> > > So in essence, I suggest that the Unicode object does not implement
> > > the buffer interface. If that has any undesirable consequences (which
> > > ones?), I suggest that 'binary write' operations (sockets, files)
> > > explicitly check for Unicode objects, and either reject them, or
> > > invoke the system encoding (i.e. ASCII).
> >
> > It's too late for any generic changes in the Unicode area.
> 
> it's not too late to fix bugs.

I doubt that we can fix all Unicode related bugs in the 2.0
stdlib before the final release... let's make this a project 
for 2.1.
 
> > The right thing to do is to make the *tools* Unicode aware, since
> > you can't really expect the Unicode-string integration mechanism
> > to fiddle things right in every possible case out there.
> 
> no, but people may expect Python to raise an exception instead
> of doing something that is not only non-portable, but also clearly
> wrong in most real-life cases.

I completely agree that the divergence between "s" and "s#"
is not ideal at all, but that's something the buffer interface
design has to fix (not the Unicode design) since this is a
general problem. AFAIK, no other object distinguishes between
getreadbuf and getcharbuf... this is why the problem
has never shown up before.

Grepping through the stdlib, there are lots of places where
"s#" is expected to work on raw data and others where
conversion to string would be more appropriate, so the one
true solution is not clear at all.

Here are some possible hacks to work-around the Unicode problem:

1. switch off getreadbuf slot

   This would break many IO-calls w/r to Unicode support.

2. make getreadbuf return the same as getcharbuf (i.e. ASCII data)

   This could work, but would break slicing and indexing 
   for e.g. a UTF-8 default encoding.   

3. leave things as they are implemented now and live with the
   consequences (mark the Python stdlib as not Unicode compatible)

   Not ideal, but leaves room for discussion.
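The problem with option 2 is that, for a multi-byte default encoding, byte positions no longer line up with character positions. A small modern-Python illustration:

```python
s = "h\xe9llo"             # 'héllo'
data = s.encode("utf-8")
assert len(s) == 5          # five characters...
assert len(data) == 6       # ...but six bytes: 'é' is two bytes in UTF-8
assert s[1] == "\xe9"
assert data[1:3] == b"\xc3\xa9"  # byte slices diverge from character slices
```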

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/

From loewis@informatik.hu-berlin.de  Tue Sep 19 13:11:00 2000
From: loewis@informatik.hu-berlin.de (Martin von Loewis)
Date: Tue, 19 Sep 2000 14:11:00 +0200 (MET DST)
Subject: [Python-Dev] sizehint in readlines
Message-ID: <200009191211.OAA06549@pandora.informatik.hu-berlin.de>

I've added support for the sizehint parameter in all places where it
was missing and the documentation referred to the file objects section
(socket, StringIO, cStringIO). The only remaining place with a
readlines function without sizehint is in multifile.py. I'll observe
that the documentation of this module is quite confused: it mentions a
str parameter for readline and readlines.

Should multifile.MultiFile.readlines also support the sizehint? (note
that read() deliberately does not support a size argument).

Regards,
Martin

From loewis@informatik.hu-berlin.de  Tue Sep 19 13:16:29 2000
From: loewis@informatik.hu-berlin.de (Martin von Loewis)
Date: Tue, 19 Sep 2000 14:16:29 +0200 (MET DST)
Subject: [Python-Dev] fileno function in file objects
Message-ID: <200009191216.OAA06594@pandora.informatik.hu-berlin.de>

Section 2.1.7.9 of the library reference explains that file objects
support a fileno method. Is that a mandatory operation on file-like
objects (e.g. StringIO)? If so, how should it be implemented? If not,
shouldn't the documentation declare it optional?

The same question for documented attributes: closed, mode, name,
softspace: need file-like objects to support them?

Regards,
Martin

From mal@lemburg.com  Tue Sep 19 13:42:24 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Tue, 19 Sep 2000 14:42:24 +0200
Subject: [Python-Dev] sizehint in readlines
References: <200009191211.OAA06549@pandora.informatik.hu-berlin.de>
Message-ID: <39C75F30.D23CEEF0@lemburg.com>

Martin von Loewis wrote:
> 
> I've added support for the sizehint parameter in all places where it
> was missing and the documentation referred to the file objects section
> (socket, StringIO, cStringIO). The only remaining place with a
> readlines function without sizehint is in multifile.py. I'll observe
> that the documentation of this module is quite confused: it mentions a
> str parameter for readline and readlines.
> 
> Should multifile.MultiFile.readlines also support the sizehint? (note
> that read() deliberately does not support a size argument).

Since it is an optional hint for the implementation, I'd suggest
adding the optional parameter without actually making any use of
it. The interface should be there though.
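That suggestion boils down to something like this (a minimal sketch, not the real multifile.MultiFile):

```python
class MultiFileLike:
    """Accepts the optional sizehint for interface compatibility,
    but deliberately ignores it."""

    def __init__(self, lines):
        self._lines = list(lines)

    def readline(self):
        return self._lines.pop(0) if self._lines else ""

    def readlines(self, sizehint=0):
        # sizehint is accepted but unused -- it is only a hint
        result = []
        while True:
            line = self.readline()
            if not line:
                break
            result.append(line)
        return result

f = MultiFileLike(["a\n", "b\n"])
assert f.readlines(1024) == ["a\n", "b\n"]
```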

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/

From mal@lemburg.com  Tue Sep 19 14:01:34 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Tue, 19 Sep 2000 15:01:34 +0200
Subject: [Python-Dev] Deja-Search on python.org defunct
Message-ID: <39C763AE.4B126CB1@lemburg.com>

The search button on python.org doesn't search the c.l.p newsgroup
anymore, but instead does a search over all newsgroups.

This link works:

http://www.deja.com/[ST_rn=ps]/qs.xp?ST=PS&svcclass=dnyr&firstsearch=yes&QRY=search_string_goes_here&defaultOp=AND&DBS=1&OP=dnquery.xp&LNG=english&subjects=&groups=comp.lang.python+comp.lang.python.announce&authors=&fromdate=&todate=&showsort=score&maxhits=25

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/

From guido@beopen.com  Tue Sep 19 15:28:42 2000
From: guido@beopen.com (Guido van Rossum)
Date: Tue, 19 Sep 2000 09:28:42 -0500
Subject: [Python-Dev] sizehint in readlines
In-Reply-To: Your message of "Tue, 19 Sep 2000 14:11:00 +0200."
 <200009191211.OAA06549@pandora.informatik.hu-berlin.de>
References: <200009191211.OAA06549@pandora.informatik.hu-berlin.de>
Message-ID: <200009191428.JAA02596@cj20424-a.reston1.va.home.com>

> I've added support for the sizehint parameter in all places where it
> was missing and the documentation referred to the file objects section
> (socket, StringIO, cStringIO). The only remaining place with a
> readlines function without sizehint is in multifile.py. I'll observe
> that the documentation of this module is quite confused: it mentions a
> str parameter for readline and readlines.

That's one for Fred...

> Should multifile.MultiFile.readlines also support the sizehint? (note
> that read() deliberately does not support a size argument).

I don't care about it here -- that API is clearly substandard.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)

From guido@beopen.com  Tue Sep 19 15:33:02 2000
From: guido@beopen.com (Guido van Rossum)
Date: Tue, 19 Sep 2000 09:33:02 -0500
Subject: [Python-Dev] fileno function in file objects
In-Reply-To: Your message of "Tue, 19 Sep 2000 14:16:29 +0200."
 <200009191216.OAA06594@pandora.informatik.hu-berlin.de>
References: <200009191216.OAA06594@pandora.informatik.hu-berlin.de>
Message-ID: <200009191433.JAA02626@cj20424-a.reston1.va.home.com>

> Section 2.1.7.9 of the library reference explains that file objects
> support a fileno method. Is that a mandatory operation on file-like
> objects (e.g. StringIO)? If so, how should it be implemented? If not,
> shouldn't the documentation declare it optional?
> 
> The same question for documented attributes: closed, mode, name,
> softspace: need file-like objects to support them?

fileno() (and isatty()) is OS specific and only works if there *is* an
underlying file number.  It should not be implemented (not even as
raising an exception) if it isn't there.

Support for softspace is needed when you expect to be printing to a
file.

The others are implementation details of the built-in file object, but
would be nice to have if they can be implemented; code that requires
them is not fully supportive of file-like objects.
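For what it's worth, modern Python settled on a slight variation of this advice: in-memory streams implement fileno() but raise io.UnsupportedOperation, so callers can probe rather than assume a descriptor exists:

```python
import io

buf = io.StringIO("hello\n")
# StringIO has no OS-level file descriptor; fileno() raises rather
# than returning garbage.
try:
    buf.fileno()
except io.UnsupportedOperation:
    has_fd = False
else:
    has_fd = True
assert not has_fd
```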

Note that this (and other, similar issues) is all because Python
doesn't have a standard class hierarchy.  I expect that we'll fix all
this in Python 3000.  Until then, I guess we have to muddle forth...

BTW, did you check in test cases for all the methods you fixed?

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)

From bwarsaw@beopen.com  Tue Sep 19 16:43:15 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Tue, 19 Sep 2000 11:43:15 -0400 (EDT)
Subject: [Python-Dev] fileno function in file objects
References: <200009191216.OAA06594@pandora.informatik.hu-berlin.de>
 <200009191433.JAA02626@cj20424-a.reston1.va.home.com>
Message-ID: <14791.35219.817065.241735@anthem.concentric.net>

>>>>> "GvR" == Guido van Rossum <guido@beopen.com> writes:

    GvR> Note that this (and other, similar issues) is all because
    GvR> Python doesn't have a standard class hierarchy.

Or a formal interface mechanism.

-Barry

From bwarsaw@beopen.com  Tue Sep 19 16:43:50 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Tue, 19 Sep 2000 11:43:50 -0400 (EDT)
Subject: [Python-Dev] sizehint in readlines
References: <200009191211.OAA06549@pandora.informatik.hu-berlin.de>
 <200009191428.JAA02596@cj20424-a.reston1.va.home.com>
Message-ID: <14791.35254.565129.298375@anthem.concentric.net>

>>>>> "GvR" == Guido van Rossum <guido@beopen.com> writes:

    >> Should multifile.MultiFile.readlines also support the sizehint?
    >> (note that read() deliberately does not support a size
    >> argument).

    GvR> I don't care about it here -- that API is clearly
    GvR> substandard.

Indeed!
-Barry

From klm@digicool.com  Tue Sep 19 19:25:04 2000
From: klm@digicool.com (Ken Manheimer)
Date: Tue, 19 Sep 2000 14:25:04 -0400 (EDT)
Subject: [Python-Dev] fileno function in file objects - Interfaces
 Scarecrow
In-Reply-To: <14791.35219.817065.241735@anthem.concentric.net>
Message-ID: <Pine.LNX.4.21.0009191357370.24497-200000@korak.digicool.com>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.
  Send mail to mime@docserver.cac.washington.edu for more info.

---1529346232-299838580-969387904=:24497
Content-Type: TEXT/PLAIN; charset=US-ASCII

Incidentally...

On Tue, 19 Sep 2000, Barry A. Warsaw wrote:

> >>>>> "GvR" == Guido van Rossum <guido@beopen.com> writes:
> 
>     GvR> Note that this (and other, similar issues) is all because
>     GvR> Python doesn't have a standard class hierarchy.
> 
> Or a formal interface mechanism.

Incidentally, jim/Zope is going forward with something like the interfaces
strawman - the "scarecrow" - that jim proposed at IPC?7?.  I don't know if
a PEP would have made any sense for 2.x, so maybe it's just as well we
haven't had time.  In the meanwhile, DC will get a chance to get
experience with and refine it... 

Anyway, for anyone that might be interested, i'm attaching a copy of
python/lib/Interfaces/README.txt from a recent Zope2 checkout.  I was
pretty enthusiastic about it when jim originally presented the scarecrow,
and on skimming it now it looks very cool.  (I'm not getting it all on my
quick peruse, and i suspect there's some contortions that wouldn't be
necessary if it were happening more closely coupled with python
development - but what jim sketches out is surprisingly sleek,
regardless...)

ken
klm@digicool.com

---1529346232-299838580-969387904=:24497
Content-Type: TEXT/plain; name="README.txt"
Content-Transfer-Encoding: BASE64
Content-ID: <Pine.LNX.4.21.0009191425040.24497@korak.digicool.com>
Content-Description: Interfaces README.txt - the Scarecrow
Content-Disposition: attachment; filename="README.txt"

UHl0aG9uIEludGVyZmFjZXMgLSBUaGUgU2NhcmVjcm93IEltcGxlbWVudGF0
aW9uDQoNClRoaXMgZG9jdW1lbnQgZGVzY3JpYmVzIG15IGltcGxlbWVudGF0
aW9uIG9mIHRoZSBQeXRob24gaW50ZXJmYWNlcw0Kc2NhcmVjcm93IHByb3Bv
c2FsLiAgDQoNClN0YXR1cw0KDQogIFRoaXMgaXMgYSBmaXJzdC1jdXQgaW1w
bGVtZW50YXRpb24gb2YgdGhlIHByb3Bvc2FsLiAgTXkgcHJpbWFyeSBnb2Fs
DQogIGlzIHRvIHNoZWQgbGlnaHQgb24gc29tZSBpZGVhcyBhbmQgdG8gcHJv
dmlkZSBhIGZyYW1ld29yayBmb3INCiAgY29uY3JldGUgZGlzY3Vzc2lvbi4N
Cg0KICBUaGlzIGltcGxlbWVudGF0aW9uIGhhcyBoYWQgbWluaW1hbCB0ZXN0
aW5nLiBJIGV4cGVjdCBtYW55IGFzcGVjdHMNCiAgb2YgdGhlIGltcGxlbWVu
dGF0aW9uIHRvIGV2b2x2ZSBvdmVyIHRpbWUuDQoNCkJhc2ljIGFzc3VtcHRp
b25zOg0KDQogIEludGVyZmFjZXMgYXJlICpub3QqIGNsYXNzZXM6DQogICAg
DQogICAgbyBJbnRlcmZhY2VzIGhhdmUgdGhlaXIgb3duICJoaWVyYXJjaHki
IChEQUcgcmVhbGx5KQ0KICAgIA0KICAgIG8gSW50ZXJmYWNlcyBhcmUgb2Jq
ZWN0cyB0aGF0IHByb3ZpZGUgYSBwcm90b2NvbCBmb3INCiAgICAJcXVlcnlp
bmcgYXR0cmlidXRlcyAoaW5jbHVkaW5nIG1ldGhvZHMpIGRlZmluZWQgYnkg
YW4NCiAgICAJYW4gaW50ZXJmYWNlOg0KICAgIA0KICAgIAkgIG5hbWVzKCkg
LS0gcmV0dXJuIGEgc2VxdWVuY2Ugb2YgZGVmaW5lZCBuYW1lcw0KICAgIA0K
ICAgIAkgIGdldERlc2NyaXB0aW9uRm9yKG5hbWUsIFtkZWZhdWx0XSkgLS0N
CiAgICAJICAgICBHZXQgYSBkZXNjcmlwdGlvbiBvZiBhIG5hbWUuDQogICAg
DQogICAgbyBZb3UgY2Fubm90IG1peCBpbnRlcmZhY2VzIGFuZCBjbGFzc2Vz
IGluIGJhc2UtY2xhc3MgbGlzdHMuDQogICAgDQogIFRoZXJlIGFyZSB1dGls
aXRpZXMgYW5kIG1ldGhvZHMgZm9yIGNvbXB1dGluZyBpbXBsaWVkIGludGVy
ZmFjZXMNCiAgZnJvbSBjbGFzc2VzIGFuZCBmb3IgY29tcHV0aW5nICJkZWZl
cnJlZCIgY2xhc3NlcyBmcm9tIGludGVyZmFjZXMuDQoNCiAgV2h5IGFyZW4n
dCBpbnRlcmZhY2UgY2xhc3Nlcz8gIEludGVyZmFjZXMgcGVyZm9ybSBhIGRp
ZmZlcmVudA0KICBmdW5jdGlvbiB0aGF0IGNsYXNzZXMuICBDbGFzc2VzIGFy
ZSBmb3Igc2hhcmluZyBpbXBsZW1lbnRhdGlvbi4NCiAgSW50ZXJmYWNlcyBh
cmUgZm9yIGRlbm90aW5nLCBkZWZpbmluZywgYW5kIGRvY3VtZW50aW5nIGFi
c3RyYWN0DQogIGJlaGF2aW9yLiAgICAgIA0KDQpEZXRhaWxzDQoNCiAgU29m
dHdhcmUgbGF5b3V0DQoNCiAgICBUaGVyZSBpcyBhbiAnSW50ZXJmYWNlJyBw
YWNrYWdlIHRoYXQgZXhwb3J0cyBhIHZhcmlldHkgb2YgdXNlZnVsDQogICAg
ZmFjaWxpdGllcy4gIFRoZXNlIGFyZSBkZXNjcmliZWQgYmVsb3cuDQoNCiAg
Q3JlYXRpbmcgSW50ZXJmYWNlcw0KDQogICAgSW50ZXJmYWNlcyBjYW4gYmUg
Y3JlYXRlZCBpbiBzZXZlcmFsIHdheXMuICBUaGUgY2xhc3Mgc3RhdGVtZW50
DQogICAgY2FuIGJlIHVzZWQgd2l0aCBvbmUgb3IgbW9yZSBpbnRlcmZhY2Vz
IHByb3ZpZGVkIGFzIGJhc2UgY2xhc3Nlcy4NCiAgICBUaGlzIGFwcHJvYWNo
IGlzIGNvbnZlbmllbnQsIHN5bnRhY3RpY2FsbHksIGFsdGhvdWdoIGl0IGlz
IGENCiAgICBsaXR0bGUgbWlzbGVhZGluZywgc2luY2UgaW50ZXJmYWNlcyBh
cmUgKm5vdCogY2xhc3Nlcy4gIEEgbWluaW1hbA0KICAgIGludGVyZmFjZSB0
aGF0IGNhbiBiZSB1c2VkIGFzIGEgYmFzZSBpcyBJbnRlcmZhY2UuQmFzZS4N
CiAgDQogICAgWW91IGNhbiBhbHNvIGNhbGwgSW50ZXJmYWNlLm5ldzoNCiAg
DQogICAgICBuZXcobmFtZSwgW2Jhc2VzLCBhdHRycywgX19kb2NfX10pIC0t
DQogIA0KCUNyZWF0ZSBhIG5ldyBpbnRlcmZhY2UuICBUaGUgYXJndW1lbnRz
IGFyZToNCiAgDQoJICBuYW1lIC0tIHRoZSBpbnRlcmZhY2UgbmFtZQ0KICAN
CgkgIGJhc2VzIC0tIGEgc2VxdWVuY2Ugb2YgImJhc2UiIGludGVyZmFjZXMu
ICBCYXNlIGludGVyZmFjZXMNCgkgICAgYXJlICJleHRlbmRlZCIgYnkgdGhl
IG5ldyBpbnRlcmZhY2UuDQogIA0KCSAgYXR0cnMgLS0gYW4gb2JqZWN0IHRo
YXQgY29uZm9ybXMgdG8NCgkgICAgJ0ludGVyZmFjZXMuU3RhbmRhcmQuRGlj
dGlvbmFyeScgdGhhdCBwcm92aWRlcyBhdHRyaWJ1dGVzDQoJICAgIGRlZmlu
ZWQgYnkgYW4gaW50ZXJmYWNlLiAgVGhlIGF0dHJpYnV0ZXMgc2hvdWxkIGJl
DQoJICAgICdJbnRlcmZhY2UuQXR0cmlidXRlIG9iamVjdHMnLg0KICANCiAg
ICBGaW5hbGx5IHlvdSBjYW4gY29tcHV0ZSBhbiBpbXBsaWVkIGludGVyZmFj
ZSBmcm9tIGEgY2xhc3MgYnkgY2FsbGluZw0KICAgICdJbnRlcmZhY2UuaW1w
bGllZEludGVyZmFjZSc6IA0KICANCiAgICAgIGltcGxpZWRJbnRlcmZhY2Uo
a2xhc3MsIFtfX25hbWVfXywgX19kb2NfX10pDQogIA0KCSBrbGFzcyAtLSBh
IGNsYXNzIGZyb20gd2hpY2ggdG8gY3JlYXRlIGFuIGludGVyZmFjZS4NCiAg
DQoJIF9fbmFtZV9fIC0tIFRoZSBuYW1lIG9mIHRoZSBpbnRlcmZhY2UuICBU
aGUgZGVmYXVsdCBuYW1lIGlzIHRoZQ0KCSAgICBjbGFzcyBuYW1lIHdpdGgg
dGhlIHN1ZmZpeCAiSW50ZXJmYWNlIi4NCiAgDQoJX19kb2NfXyAtLSBhIGRv
YyBzdHJpbmcgZm9yIHRoZSBpbnRlcmZhY2UuICBUaGUgZGVmYXVsdCBkb2MN
CgkgICAgc3RyaW5nIGlzIHRoZSBjbGFzcyBkb2Mgc3RyaW5nLg0KICANCglU
aGUgZ2VuZXJhdGVkIGludGVyZmFjZSBoYXMgYXR0cmlidXRlcyBmb3IgZWFj
aCBwdWJsaWMgbWV0aG9kDQoJZGVmaW5lZCBpbiBvciBpbmhlcml0ZWQgYnkg
dGhlIGludGVyZmFjZS4gQSBtZXRob2QgaXMgY29uc2lkZXJlZA0KCXB1Ymxp
YyBpZiBpdCBoYXMgYSBub24tZW1wdHkgZG9jIHN0cmluZyBhbmQgaWYgaXQn
cyBuYW1lIGRvZXMNCglub3QgYmVnaW4gd2l0aCAnXycgb3IgZG9lcyBiZWdp
biBhbmQgZW5kIHdpdGggJ19fJyBhbmQgaXMNCglncmVhdGVyIHRoYW4gNCBj
aGFyYWN0ZXJzIGluIGxlbmd0aC4NCiAgDQogICAgTm90ZSB0aGF0IGNvbXB1
dGluZyBhbiBpbnRlcmZhY2UgZnJvbSBhIGNsYXNzIGRvZXMgbm90IGF1dG9t
YXRpY2FsbHkNCiAgICBhc3NlcnQgdGhhdCB0aGUgY2xhc3MgaW1wbGVtZW50
cyBhbiBpbnRlcmZhY2UuDQoNCiAgICBIZXJlJ3MgYW4gZXhhbXBsZToNCg0K
ICAgICAgY2xhc3MgWDoNCg0KICAgICAgICBkZWYgZm9vKHNlbGYsIGEsIGIp
Og0KICAgICAgICAgIC4uLg0KDQogICAgICBYSW50ZXJmYWNlPUludGVyZmFj
ZS5pbXBsaWVkSW50ZXJmYWNlKFgpDQogICAgICBYLl9faW1wbGVtZW50c19f
PVhJbnRlcmZhY2UNCg0KICBJbnRlcmZhY2UgYXNzZXJ0aW9ucw0KDQogICAg
T2JqZWN0cyBjYW4gYXNzZXJ0IHRoYXQgdGhleSBpbXBsZW1lbnQgb25lIG9y
IG1vcmUgaW50ZXJmYWNlcy4NCiAgICBUaGV5IGRvIHRoaXMgYnkgYnkgZGVm
aW5pbmcgYW4gJ19faW50ZXJmYWNlc19fJyBhdHRyaWJ1dGUgdGhhdCBpcw0K
ICAgIGJvdW5kIHRvIGFuIGludGVyZmFjZSBhc3NlcnRpb24uDQoNCiAgICBB
biAqaW50ZXJmYWNlIGFzc2VydGlvbiogaXMgZWl0aGVyOiANCg0KICAgICAg
LSBhbiBJbnRlcmZhY2Ugb3INCg0KICAgICAgLSBhIHNlcXVlbmNlIG9mIGlu
dGVyZmFjZSBhc3NlcnRpb25zLg0KDQogICAgSGVyZSBhcmUgc29tZSBleGFt
cGxlcyBvZiBpbnRlcmZhY2UgYXNzZXJ0aW9uczoNCg0KICAgICAgSTENCg0K
ICAgICAgSTEsIEkyDQoNCiAgICAgIEkxLCAoSTIsIEkzKQ0KDQogICAgd2hl
cmUgSTEsIEkyLCBhbmQgSTMgYXJlIGludGVyZmFjZXMuDQoNCiAgICBDbGFz
c2VzIG1heSBwcm92aWRlIChkZWZhdWx0KSBhc3NlcnRpb25zIGZvciB0aGVp
ciBpbnN0YW5jZXMNCiAgICAoYW5kIHN1YmNsYXNzIGluc3RhbmNlcykuICBU
aGUgdXN1YWwgaW5oZXJpdGFuY2UgcnVsZXMgYXBwbHkuDQogICAgTm90ZSB0
aGF0IHRoZSBkZWZpbml0aW9uIG9mIGludGVyZmFjZSBhc3NlcnRpb25zIG1h
a2VzIGNvbXBvc2l0aW9uDQogICAgb2YgaW50ZXJmYWNlcyBzdHJhaWdodGZv
cndhcmQuICBGb3IgZXhhbXBsZToNCg0KICAgICAgY2xhc3MgQToNCg0KICAg
ICAgICBfX2ltcGxlbWVudHNfXyA9IEkxLCBJMiANCg0KICAgICAgICAuLi4N
Cg0KICAgICAgY2xhc3MgQg0KDQogICAgICAgIF9faW1wbGVtZW50c19fID0g
STMsIEk0DQoNCiAgICAgIGNsYXNzIEMoQS4gQik6DQogICAgICAgIC4uLg0K
DQogICAgICBjbGFzcyBEOg0KICAgICAgICANCiAgICAgICAgX19pbXBsZW1l
bnRzX18gPSBJNQ0KDQogICAgICBjbGFzcyBFOg0KDQogICAgICAgIF9faW1w
bGVtZW50c19fID0gSTUsIEEuX19pbXBsZW1lbnRzX18NCiAgICAgIA0KICBT
cGVjaWFsLWNhc2UgaGFuZGxpbmcgb2YgY2xhc3Nlcw0KDQogICAgU3BlY2lh
bCBoYW5kbGluZyBpcyByZXF1aXJlZCBmb3IgUHl0aG9uIGNsYXNzZXMgdG8g
bWFrZSBhc3NlcnRpb25zDQogICAgYWJvdXQgdGhlIGludGVyZmFjZXMgYSBj
bGFzcyBpbXBsZW1lbnRzLCBhcyBvcHBvc2VkIHRvIHRoZQ0KICAgIGludGVy
ZmFjZXMgdGhhdCB0aGUgaW5zdGFuY2VzIG9mIHRoZSBjbGFzcyBpbXBsZW1l
bnQuICBZb3UgY2Fubm90DQogICAgc2ltcGx5IGRlZmluZSBhbiAnX19pbXBs
ZW1lbnRzX18nIGF0dHJpYnV0ZSBmb3IgdGhlIGNsYXNzIGJlY2F1c2UNCiAg
ICBjbGFzcyAiYXR0cmlidXRlcyIgYXBwbHkgdG8gaW5zdGFuY2VzLg0KDQog
ICAgQnkgZGVmYXVsdCwgY2xhc3NlcyBhcmUgYXNzdW1lZCB0byBpbXBsZW1l
bnQgdGhlIEludGVyZmFjZS5TdGFuZGFyZC5DbGFzcw0KICAgIGludGVyZmFj
ZS4gIEEgY2xhc3MgbWF5IG92ZXJyaWRlIHRoZSBkZWZhdWx0IGJ5IHByb3Zp
ZGluZyBhDQogICAgJ19fY2xhc3NfaW1wbGVtZW50c19fJyBhdHRyaWJ1dGUg
d2hpY2ggd2lsbCBiZSB0cmVhdGVkIGFzIGlmIGl0IHdlcmUNCiAgICB0aGUg
J19faW1wbGVtZW50c19fJyBhdHRyaWJ1dGUgb2YgdGhlIGNsYXNzLg0KDQog
IFRlc3RpbmcgYXNzZXJ0aW9ucw0KDQogICAgWW91IGNhbiB0ZXN0IHdoZXRo
ZXIgYW4gb2JqZWN0IGltcGxlbWVudHMgYW4gaW50ZXJmYWNlIGJ5IGNhbGxp
bmcNCiAgICB0aGUgJ2ltcGxlbWVudGVkQnknIG1ldGhvZCBvZiB0aGUgaW50
ZXJmYWNlIGFuZCBwYXNzaW5nIHRoZQ0KICAgIG9iamVjdDo6DQoNCiAgICAg
IEkxLmltcGxlbWVudGVkQnkoeCkNCg0KICAgIFNpbWlsYXJseSwgeW91IGNh
biB0ZXN0IHdoZXRoZXIsIGJ5IGRlZmF1bHQsIGluc3RhbmNlcyBvZiBhIGNs
YXNzDQogICAgaW1wbGVtZW50IGFuIGludGVyZmFjZSBieSBjYWxsaW5nIHRo
ZSAnaW1wbGVtZW50ZWRCeUluc3RhbmNlc09mJw0KICAgIG1ldGhvZCBvbiB0
aGUgaW50ZXJmYWNlIGFuZCBwYXNzaW5nIHRoZSBjbGFzczo6DQogIA0KICAg
ICAgSTEuaW1wbGVtZW50ZWRCeUluc3RhbmNlc09mKEEpDQoNCiAgVGVzdGlu
ZyBpbnRlcmZhY2VzDQoNCiAgICBZb3UgY2FuIHRlc3Qgd2hldGhlciBvbmUg
aW50ZXJmYWNlIGV4dGVuZHMgYW5vdGhlciBieSBjYWxsaW5nIHRoZQ0KICAg
IGV4dGVuZHMgbWV0aG9kIG9uIGFuIGludGVyZmFjZToNCg0KICAgICAgSTEu
ZXh0ZW5kcyhJMikNCg0KICAgIE5vdGUgdGhhdCBhbiBpbnRlcmZhY2UgZG9l
cyBub3QgZXh0ZW5kIGl0c2VsZi4NCg0KICBJbnRlcmZhY2UgYXR0cmlidXRl
cw0KDQogICAgVGhlIHB1cnBvc2Ugb2YgYW4gaW50ZXJmYWNlIGlzIHRvIGRl
c2NyaWJlIGJlaGF2aW9yLCBub3QgdG8NCiAgICBwcm92aWRlIGltcGxlbWVu
dGF0aW9uLiAgSW4gYSBzaW1pbGFyIGZhc2hpb24gdGhlIGF0dHJpYnV0ZXMg
b2YNCiAgICBhbiBpbnRlcmZhY2UgZGVzY3JpYmUgYW5kIGRvY3VtZW50IHRo
ZSBhdHRyaWJ1dGVzIHByb3ZpZGVkIGJ5IGFuDQogICAgb2JqZWN0IHRoYXQg
aW1wbGVtZW50cyB0aGUgaW50ZXJmYWNlLg0KDQogICAgVGhlcmUgYXJlIGN1
cnJlbnRseSB0d28ga2luZHMgb2Ygc3VwcG9ydGVkIGF0dHJpYnV0ZXM6DQoN
CiAgICAgIEludGVyZmFjZS5BdHRyaWJ1dGUgLS0gVGhlIG9iamVjdHMgZGVz
Y3JpYmUgaW50ZXJmYWNlDQogICAgICAgIGF0dHJpYnV0ZXMuICBUaGV5IGRl
ZmluZSBhdCBsZWFzdCBuYW1lcyBhbmQgZG9jIHN0cmluZ3MgYW5kDQogICAg
ICAgIG1heSBkZWZpbmUgb3RoZXIgaW5mb3JtYXRpb24gYXMgd2VsbC4NCg0K
ICAgICAgSW50ZXJmYWNlLk1ldGhvZCAtLSBUaGVzZSBhcmUgaW50ZXJmYWNl
IGF0dHJpYnV0ZXMgdGhhdA0KICAgICAgICBkZXNjcmliZSBtZXRob2RzLiAg
VGhleSAqbWF5KiBkZWZpbmUgaW5mb3JtYXRpb24gYWJvdXQgbWV0aG9kDQog
ICAgICAgIHNpZ25hdHVyZXMuIChOb3RlIE1ldGhvZHMgYXJlIGtpbmRzIG9m
IEF0dHJpYnV0ZXMuKQ0KDQogICAgV2hlbiBhIGNsYXNzIHN0YXRlbWVudCBp
cyB1c2VkIHRvIGRlZmluZSBhbiBpbnRlcmZhY2UsIG1ldGhvZA0KICAgIGRl
ZmluaXRpb25zIG1heSBiZSBwcm92aWRlZC4gIFRoZXNlIGdldCBjb252ZXJ0
ZWQgdG8gTWV0aG9kDQogICAgb2JqZWN0cyBkdXJpbmcgaW50ZXJmYWNlIGNy
ZWF0aW9uLiAgRm9yIGV4YW1wbGU6DQoNCiAgICAgIGNsYXNzIEkxKEludGVy
ZmFjZS5CYXNlKToNCiAgICAgICAgIA0KICAgICAgICBfX25hbWVfXz1BdHRy
aWJ1dGUoIlRoZSBvYmplY3QncyBuYW1lIikNCg0KICAgICAgICBkZWYgZm9v
KHNlbGYsIGEsIGIpOg0KICAgICAgICAgICAiYmxhaCBibGFoIg0KDQogICAg
ZGVmaW5lcyBhbiBpbnRlcmZhY2UsICdJMScgdGhhdCBoYXMgdHdvIGF0dHJp
YnV0ZXMsICdfX25hbWVfXycgYW5kDQogICAgJ2ZvbycuIFRoZSBhdHRyaWJ1
dGUgJ2ZvbycgaXMgYSBNZXRob2QgaW5zdGFuY2UuICBJdCBpcyAqbm90KiBh
DQogICAgUHl0aG9uIG1ldGhvZC4NCg0KICAgIEl0IGlzIG15IGV4cGVjdGF0
aW9uIHRoYXQgQXR0cmlidXRlIG9iamVjdHMgd2lsbCBldmVudHVhbGx5IGJl
DQogICAgYWJsZSB0byBwcm92aWRlIGFsbCBzb3J0cyBvZiBpbnRlcmVzdGlu
ZyBtZXRhLWRhdGEuICANCg0KICBEZWZlcnJlZCBjbGFzc2VzDQoNCiAgICBZ
b3UgY2Fubm90IHVzZSBpbnRlcmZhY2VzIGFzIGJhc2UgY2xhc3Nlcy4gIFlv
dSBjYW4sIGhvd2V2ZXIsIA0KICAgIGNyZWF0ZSAiZGVmZXJyZWQiIGNsYXNz
ZXMgZnJvbSBhbiBpbnRlcmZhY2U6DQoNCiAgICAgIGNsYXNzIFN0YWNrSW50
ZXJmYWNlKEludGVyZmFjZS5CYXNlKToNCg0KICAgICAgICAgZGVmIHB1c2go
c2VsZiwgdik6DQogICAgICAgICAgICAiQWRkIGEgdmFsdWUgdG8gdGhlIHRv
cCBvZiBhIHN0YWNrIg0KDQogICAgICAgICBkZWYgcG9wKHNlbGYpOg0KICAg
ICAgICAgICAgIlJlbW92ZSBhbmQgcmV0dXJuIGFuIG9iamVjdCBmcm9tIHRo
ZSB0b3Agb2YgdGhlIHN0YWNrIg0KDQogICAgICBjbGFzcyBTdGFjayhTdGFj
a0ludGVyZmFjZS5kZWZlcnJlZCgpKToNCiAgICAgICAgICJUaGlzIGlzIHN1
cHBvc2VkIHRvIGltcGxlbWVudCBhIHN0YWNrIg0KDQogICAgICAgICBfX2lt
cGxlbWVudHNfXz1TdGFja0ludGVyZmFjZQ0KDQogICAgQXR0ZW1wdHMgdG8g
Y2FsbCBtZXRob2RzIGluaGVyaXRlZCBmcm9tIGEgZGVmZXJyZWQgY2xhc3Mg
d2lsbA0KICAgIHJhaXNlIEludGVyZmFjZS5Ccm9rZW5JbXBsZW1lbnRhdGlv
biBleGNlcHRpb25zLg0KDQogIFRyaWFsIGJhbGxvb246IGFic3RyYWN0IGlt
cGxlbWVudGF0aW9ucw0KDQogICAgVGltIFBldGVycyBoYXMgZXhwcmVzc2Vk
IHRoZSBkZXNpcmUgdG8gcHJvdmlkZSBhYnN0cmFjdA0KICAgIGltcGxlbWVu
dGF0aW9ucyBpbiBpbnRlcmZhY2UgZGVmaW5pdGlvbnMsIHdoZXJlLCBwcmVz
dW1hYmx5LCBhbg0KICAgIGFic3RyYWN0IGltcGxlbWVudGF0aW9uIHVzZXMg
b25seSBmZWF0dXJlcyBkZWZpbmVkIGJ5IHRoZQ0KICAgIGludGVyZmFjZS4N
Cg0KICAgIFBlcmhhcHMgaWYgYSBtZXRob2QgZGVmaW5pdGlvbiBoYXMgYSBi
b2R5IChvdGhlciB0aGFuIGEgZG9jDQogICAgc3RyaW5nKSwgdGhlbiB0aGUg
Y29ycmVzcG9uZGluZyBtZXRob2QgaW4gdGhlIGRlZmVycmVkIGNsYXNzDQog
ICAgd2lsbCBub3QgYmUgZGVmZXJyZWQuIFRoaXMgd291bGQgbm90IGJlIGhh
cmQgdG8gZG8gaW4gQ1B5dGhvbg0KICAgIGlmIEkgY2hlYXQgYW5kIHNuaWZm
IGF0IG1ldGhvZCBieXRlY29kZXMuDQoNCiAgICBGb3IgZXhhbXBsZToNCg0K
ICAgICAgY2xhc3MgTGlzdEludGVyZmFjZShJbnRlcmZhY2UuU3RhbmRhcmQu
TXV0YWJsZVNlcXVlbmNlKToNCg0KICAgICAgICBkZWYgYXBwZW5kKHNlbGYs
IHYpOg0KICAgICAgICAgICAiYWRkIGEgdmFsdWUgdG8gdGhlIGVuZCBvZiB0
aGUgb2JqZWN0Ig0KDQoJZGVmIHB1c2goc2VsZiwgdik6DQogICAgICAgICAg
ICJhZGQgYSB2YWx1ZSB0byB0aGUgZW5kIG9mIHRoZSBvYmplY3QiDQogICAg
ICAgICAgIHNlbGYuYXBwZW5kKHYpDQoNCiAgICAgIExpc3RCYXNlPUxpc3RJ
bnRlcmZhY2UuZGVmZXJyZWQoKQ0KDQogICAgICBjbGFzcyBMaXN0SW1wbGVt
ZW50ZXIoTGlzdGJhc2UpOg0KICAgICAgICAgZGVmIGFwcGVuZChzZWxmLCB2
KTogLi4uDQoNCiAgICBJbiB0aGlzIGV4YW1wbGUsIHdlIGNhbiBjcmVhdGUg
YSBiYXNlIGNsYXNzLCBMaXN0QmFzZSwgdGhhdCBwcm92aWRlcyBhbg0KICAg
IGFic3RyYWN0IGltcGxlbWVudGF0aW9uIG9mICdwdXNoJyBhbmQgYW4gaW1w
bGVtZW50YXRpb24gb2YgYXBwZW5kDQogICAgdGhhdCByYWlzZXMgYW4gZXJy
b3IgaWYgbm90IG92ZXJyaWRkZW4uDQoNCiAgU3RhbmRhcmQgaW50ZXJmYWNl
cw0KDQogICAgVGhlIG1vZHVsZSBJbnRlcmZhY2UuU3RhbmRhcmQgZGVmaW5l
cyBpbnRlcmZhY2VzIGZvciBzdGFuZGFyZA0KICAgIHB5dGhvbiBvYmplY3Rz
Lg0KDQogICAgVGhpcyBtb2R1bGUgYW5kIHRoZSBtb2R1bGVzIGl0IHVzZXMg
bmVlZCBhIGxvdCBtb3JlIHdvcmshDQoNCiAgSGFuZGxpbmcgZXhpc3Rpbmcg
YnVpbHQtaW4gdHlwZXMNCg0KICAgIEEgaGFjayBpcyBwcm92aWRlZCB0byBh
bGxvdyBpbXBsZW1lbnRhdGlvbiBhc3NlcnRpb25zIHRvIGJlIG1hZGUNCiAg
ICBmb3IgYnVpbHRpbiB0eXBlcy4gIEludGVyZmFjZXMuYXNzZXJ0VHlwZUlt
cGxlbWVudHMgY2FuIGJlIGNhbGxlZA0KICAgIHRvIGFzc2VydCB0aGF0IGlu
c3RhbmNlcyBvZiBhIGJ1aWx0LWluIHR5cGUgaW1wbGVtZW50IG9uZSBvciBt
b3JlDQogICAgaW50ZXJmYWNlczo6DQoNCiAgICAgICBVdGlsLmFzc2VydFR5
cGVJbXBsZW1lbnRzKA0KICAgICAgICAgdHlwZSgxTCksIA0KICAgICAgICAg
KEFicml0cmFyeVByZWNpc2lvbiwgQml0TnVtYmVyLCBTaWduZWQpKQ0KDQpJ
c3N1ZXMNCg0KICBvIFdoYXQgc2hvdWxkIHRoZSBvYmplY3RzIHRoYXQgZGVm
aW5lIGF0dHJpYnV0ZXMgbG9vayBsaWtlPw0KICAgIFRoZXkgc2hvdWxkbid0
ICpiZSogdGhlIGF0dHJpYnV0ZXMsIGJ1dCBzaG91bGQgZGVzY3JpYmUgdGhl
DQogICAgdGhlIGF0dHJpYnV0ZXMuDQoNCiAgICBOb3RlIHRoYXQgd2UndmUg
bWFkZSBhIGZpcnN0IGN1dCB3aXRoICdBdHRyaWJ1dGUnIGFuZA0KICAgICdN
ZXRob2QnIG9iamVjdHMuDQoNCiAgICBOb3RlIHRoYXQgdGhlIGluZm9ybWF0
aW9uIGNvbnRhaW5lZCBpbiBhIG5vbi1tZXRob2QgYXR0cmlidXRlDQogICAg
b2JqZWN0IG1pZ2h0IGNvbnRhaW4gdGhlIGF0dHJpYnV0ZSB2YWx1ZSdzIGlu
dGVyZmFjZSBhcyB3ZWxsIGFzDQogICAgb3RoZXIgaW5mb3JtYXRpb24sIHN1
Y2ggYXMgYW4gYXR0cmlidXRlJ3MgdXNhZ2UuDQoNCiAgbyBUaGVyZSBhcmUg
cGxhY2VzIGluIHRoZSBjdXJyZW50IGltcGxlbWVudGF0aW9uIHRoYXQgdXNl
DQogICAgJ2lzaW5zdGFuY2UnIHRoYXQgc2hvdWxkIGJlIGNoYW5nZWQgdG8g
dXNlIGludGVyZmFjZQ0KICAgIGNoZWNrcy4NCg0KICBvIFdoZW4gdGhlIGlu
dGVyZmFjZSBpbnRlcmZhY2VzIGFyZSBmaW5hbGl6ZWQsIEMgaW1wbGVtZW50
YXRpb25zDQogICAgd2lsbCBiZSBoaWdobHkgZGVzaXJhYmxlIGZvciBwZXJm
b3JtYW5jZSByZWFzb25zLg0KDQogIG8gQSBsb3QgbW9yZSB3b3JrIGlzIG5l
ZWRlZCBvbiB0aGUgc3RhbmRhcmQgaW50ZXJmYWNlIGhpZXJhcmNoeS4gICAg
DQoNCiAgLi4uDQo=
---1529346232-299838580-969387904=:24497--

From martin@loewis.home.cs.tu-berlin.de  Tue Sep 19 21:48:53 2000
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Tue, 19 Sep 2000 22:48:53 +0200
Subject: [Python-Dev] Python 1.5.2 modules need porting to 2.0 because of unicode - comments please
Message-ID: <200009192048.WAA01414@loewis.home.cs.tu-berlin.de>

> I doubt that we can fix all Unicode related bugs in the 2.0
> stdlib before the final release... let's make this a project 
> for 2.1.

Exactly my feelings. Since we cannot possibly fix all problems, we may
need to change the behaviour later.

If we now silently do the wrong thing, silently changing it to the
then-right thing in 2.1 may break people's code. So I'm asking that
cases where we do not clearly do the right thing produce an
exception now; we can later fix them to accept more cases, should the
need arise.

In the specific case, dropping support for Unicode output in binary
files is the right thing. We don't know what the user expects, so it
is better to produce an exception than to silently put incorrect bytes
into the stream - that is a bug that we still can fix.

The easiest way with the clearest impact is to drop the buffer
interface in unicode objects. Alternatively, not supporting them
for s# also appears reasonable. Users experiencing the problem in
testing will then need to make an explicit decision how they want to
encode the Unicode objects.

If expediting the issue is necessary, I can submit a bug report,
and propose a patch.

Regards,
Martin

From guido@beopen.com  Tue Sep 19 23:00:34 2000
From: guido@beopen.com (Guido van Rossum)
Date: Tue, 19 Sep 2000 17:00:34 -0500
Subject: [Python-Dev] Python 1.5.2 modules need porting to 2.0 because of unicode - comments please
In-Reply-To: Your message of "Tue, 19 Sep 2000 22:48:53 +0200."
 <200009192048.WAA01414@loewis.home.cs.tu-berlin.de>
References: <200009192048.WAA01414@loewis.home.cs.tu-berlin.de>
Message-ID: <200009192200.RAA01853@cj20424-a.reston1.va.home.com>

> > I doubt that we can fix all Unicode related bugs in the 2.0
> > stdlib before the final release... let's make this a project 
> > for 2.1.
> 
> Exactly my feelings. Since we cannot possibly fix all problems, we may
> need to change the behaviour later.
> 
> If we now silently do the wrong thing, silently changing it to the
> then-right thing in 2.1 may break people's code. So I'm asking that
> cases where we do not clearly do the right thing produce an
> exception now; we can later fix them to accept more cases, should the
> need arise.
> 
> In the specific case, dropping support for Unicode output in binary
> files is the right thing. We don't know what the user expects, so it
> is better to produce an exception than to silently put incorrect bytes
> into the stream - that is a bug that we still can fix.
> 
> The easiest way with the clearest impact is to drop the buffer
> interface in unicode objects. Alternatively, not supporting them
> for s# also appears reasonable. Users experiencing the problem in
> testing will then need to make an explicit decision how they want to
> encode the Unicode objects.
> 
> If expediting the issue is necessary, I can submit a bug report,
> and propose a patch.

Sounds reasonable to me (but I haven't thought of all the issues).

For writing binary Unicode strings, one can use

  f.write(u.encode("utf-16"))		# Adds byte order mark
  f.write(u.encode("utf-16-be"))	# Big-endian
  f.write(u.encode("utf-16-le"))	# Little-endian
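A quick check of what those three codecs actually emit (spelled in today's
Python, where the u prefix is no longer needed): only plain "utf-16"
prepends a byte order mark.

```python
u = "Ab"

# "utf-16" adds a BOM and uses the platform's byte order;
# the -be/-le variants are explicit about endianness and BOM-free.
print(u.encode("utf-16"))
print(u.encode("utf-16-be"))   # b'\x00A\x00b'
print(u.encode("utf-16-le"))   # b'A\x00b\x00'
```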

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)

From mal@lemburg.com  Tue Sep 19 22:29:06 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Tue, 19 Sep 2000 23:29:06 +0200
Subject: [Python-Dev] Python 1.5.2 modules need porting to 2.0 because of
 unicode - comments please
References: <200009192048.WAA01414@loewis.home.cs.tu-berlin.de> <200009192200.RAA01853@cj20424-a.reston1.va.home.com>
Message-ID: <39C7DAA2.A04E5008@lemburg.com>

Guido van Rossum wrote:
> 
> > > I doubt that we can fix all Unicode related bugs in the 2.0
> > > stdlib before the final release... let's make this a project
> > > for 2.1.
> >
> > Exactly my feelings. Since we cannot possibly fix all problems, we may
> > need to change the behaviour later.
> >
> > If we now silently do the wrong thing, silently changing it to the
> > then-right thing in 2.1 may break people's code. So I'm asking that
> > cases where we do not clearly do the right thing produce an
> > exception now; we can later fix them to accept more cases, should the
> > need arise.
> >
> > In the specific case, dropping support for Unicode output in binary
> > files is the right thing. We don't know what the user expects, so it
> > is better to produce an exception than to silently put incorrect bytes
> > into the stream - that is a bug that we still can fix.
> >
> > The easiest way with the clearest impact is to drop the buffer
> > interface in unicode objects. Alternatively, not supporting them
> > for s# also appears reasonable. Users experiencing the problem in
> > testing will then need to make an explicit decision how they want to
> > encode the Unicode objects.
> >
> > If expediting the issue is necessary, I can submit a bug report,
> > and propose a patch.
> 
> Sounds reasonable to me (but I haven't thought of all the issues).
> 
> For writing binary Unicode strings, one can use
> 
>   f.write(u.encode("utf-16"))           # Adds byte order mark
>   f.write(u.encode("utf-16-be"))        # Big-endian
>   f.write(u.encode("utf-16-le"))        # Little-endian

Right.

Possible ways to fix this:

1. disable Unicode's getreadbuf slot

   This would effectively make Unicode objects unusable for
   all APIs which use "s#"... and probably give people a lot
   of headaches. OTOH, it would probably motivate lots of
   users to submit patches which make the stdlib
   Unicode-aware (hopefully ;-)

2. same as 1., but also make "s#" fall back to getcharbuf
   in case getreadbuf is not defined

   This would make Unicode objects compatible with "s#", but
   still prevent writing of binary data: getcharbuf returns
   the Unicode object encoded using the default encoding which
   is ASCII per default.

3. special case "s#" in some way to handle Unicode or to
   raise an exception pointing explicitly to the problem
   and its (possible) solution

I'm not sure which of these paths to take. Perhaps solution
2. is the most feasible compromise between "exceptions everywhere"
and "encoding confusion".

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/

From guido@beopen.com  Tue Sep 19 23:47:11 2000
From: guido@beopen.com (Guido van Rossum)
Date: Tue, 19 Sep 2000 17:47:11 -0500
Subject: [Python-Dev] Missing API in re module
Message-ID: <200009192247.RAA02122@cj20424-a.reston1.va.home.com>

When investigating and fixing Tim's report that the Replace dialog in
IDLE was broken, I realized that there's an API missing from the re
module.

For search-and-replace, IDLE uses a regular expression to find the
next match, and then needs to do whatever sub() does to that match.
But there's no API to spell "whatever sub() does"!  It's not safe to
call sub() on just the matching substring -- the match might depend on
context.

It seems that a new API is needed.  I propose to add the following
method of match objects:

  match.expand(repl)

    Return the string obtained by doing backslash substitution as for
    the sub() method in the replacement string: expansion of \n ->
    linefeed etc., and expansion of numeric backreferences (\1, \2,
    ...) and named backreferences (\g<1>, \g<name>, etc.);
    backreferences refer to groups in the match object.
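A sketch of the proposed behaviour, spelled with today's re module (where
expand() did end up under exactly this name):

```python
import re

m = re.search(r"(?P<first>\w+) (?P<second>\w+)", "hello world")

# Expand a sub()-style replacement template against this match:
# \1 and \g<name> are backreferences into the match's groups, and
# escapes such as \n are interpreted exactly as sub() would.
print(m.expand(r"\g<second>, \1!"))   # world, hello!
```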

Or am I missing something and is there already a way to do this?

(Side note: the SRE code does some kind of compilation on the
replacement template; I'd like to see this cached, as otherwise IDLE's
replace-all button will take forever...)

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)

From thomas@xs4all.net  Wed Sep 20 14:23:10 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Wed, 20 Sep 2000 15:23:10 +0200
Subject: [Python-Dev] problems importing _tkinter on Linux build
In-Reply-To: <20000917144614.A25718@ActiveState.com>; from trentm@ActiveState.com on Sun, Sep 17, 2000 at 02:46:14PM -0700
References: <20000917142718.A25180@ActiveState.com> <20000917144614.A25718@ActiveState.com>
Message-ID: <20000920152309.A6675@xs4all.nl>

On Sun, Sep 17, 2000 at 02:46:14PM -0700, Trent Mick wrote:
> On Sun, Sep 17, 2000 at 02:27:18PM -0700, Trent Mick wrote:
> > 
> > I get the following error trying to import _tkinter in a Python 2.0 build:
> > 
> > > ./python
> > ./python: error in loading shared libraries: libtk8.3.so: cannot open shared object file: No such file or directory
> > 

> Duh, learning about LD_LIBRARY_PATH (set LD_LIBRARY_PATH to /usr/local/lib)
> and everything is hunky dory. I presumed that /usr/local/lib would be
> on the default search path for shared libraries. Bad assumption I guess.

On *some* ELF systems (at least Linux and BSDI) you can add /usr/local/lib
to /etc/ld.so.conf and rerun 'ldconfig' (which builds the cachefile
/etc/ld.so.cache, which is used as the 'search path'.) I personally find this
a much better approach than the LD_LIBRARY_PATH or -R/-rpath approaches,
especially for 'system-wide' shared libraries (you can use one of the other
approaches if you want to tie a specific binary to a specific shared library
in a specific directory, or have a binary use a different shared library
(from a different directory) in some of the cases -- though you can use
LD_PRELOAD and such for that as well.)

If you tie your binary to a specific directory, you might lose portability,
necessitating ugly script-hacks that find & set a proper LD_LIBRARY_PATH or
LD_PRELOAD and such before calling the real program. I'm not sure if recent
SunOS's support something like ld.so.conf, but old ones didn't, and I sure
wish they did ;)

Back-from-vacation-and-trying-to-catch-up-on-2000+-mails-ly y'rs,
-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!

From mal@lemburg.com  Wed Sep 20 15:22:44 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Wed, 20 Sep 2000 16:22:44 +0200
Subject: [Python-Dev] Python syntax checker ?
Message-ID: <39C8C834.5E3B90E7@lemburg.com>

Would it be possible to write a Python syntax checker that doesn't
stop processing at the first error it finds, but instead tries
to continue as far as possible (much like make -k)?

If yes, could the existing Python parser/compiler be reused for
such a tool?

I was asked to write a tool which checks Python code and returns
a list of the errors it finds (syntax errors and possibly even some
lint warnings) instead of stopping at the first one.

Thanks for any tips,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/

From loewis@informatik.hu-berlin.de  Wed Sep 20 18:07:06 2000
From: loewis@informatik.hu-berlin.de (Martin von Loewis)
Date: Wed, 20 Sep 2000 19:07:06 +0200 (MET DST)
Subject: [Python-Dev] Python syntax checker ?
Message-ID: <200009201707.TAA07172@pandora.informatik.hu-berlin.de>

> Would it be possible to write a Python syntax checker that doesn't
> stop processing at the first error it finds, but instead tries to
> continue as far as possible (much like make -k)?

In "Compilerbau" (compiler construction), this is referred to as
"Fehlerstabilisierung" (error stabilization). I suggest having a look
at the dragon book (Aho, Sethi, Ullman).

The common approach is to insert or remove tokens, using some
heuristics. In YACC, it is possible to add error productions to the
grammar. Whenever an error occurs, the parser assigns all tokens to
the "error" non-terminal until it concludes that it can perform a
reduce action.

A similar approach might work for the Python Grammar. For each
production, you'd define a set of stabilization tokens. If these are
encountered, then the rule would be considered complete. Everything is
consumed until a stabilization token is found.

For example, all expressions could be stabilized with a
keyword. I.e. if you encounter a syntax error inside an expression,
you ignore all tokens until you see 'print', 'def', 'while', etc.
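That recovery rule can be sketched in a few lines. The token stream and
keyword set here are simplified for illustration; real recovery would
operate on the tokenizer's output.

```python
# Statement-introducing keywords used as stabilization points.
STABILIZERS = {"print", "def", "while", "if", "for", "class", "return"}

def resync(tokens, error_pos):
    """After a syntax error at index error_pos, discard tokens until a
    stabilization keyword appears; return the index to resume parsing."""
    for i in range(error_pos, len(tokens)):
        if tokens[i] in STABILIZERS:
            return i
    return len(tokens)  # no stabilizer left: consume the remaining input

# Error inside an expression: everything up to 'print' is thrown away.
tokens = ["x", "=", "(", "1", "+", "+", "print", "y"]
print(resync(tokens, 2))   # 6
```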

In some cases, it may be better to add input rather than removing
it. For example, if you get an "inconsistent dedent" error, you could
assume that this really was a consistent dedent, or you could assume
it was not meant as a dedent at all. Likewise, if you get a
single-quote start-of-string, with no single-quote until end-of-line,
you just should assume there was one.

Adding error productions to ignore input until stabilization may be
feasible on top of the existing parser. Adding tokens in the right
place is probably harder - I'd personally go for a pure Python
solution, that operates on Grammar/Grammar.

Regards,
Martin


From tismer@appliedbiometrics.com  Wed Sep 20 17:35:50 2000
From: tismer@appliedbiometrics.com (Christian Tismer)
Date: Wed, 20 Sep 2000 19:35:50 +0300
Subject: [Python-Dev] 2.0 Optimization & speed
References: <200009082048.WAA14671@python.inrialpes.fr> <39B951CC.3C0AE801@lemburg.com>
Message-ID: <39C8E766.18D9BDD8@appliedbiometrics.com>


"M.-A. Lemburg" wrote:
> 
> Vladimir Marangozov wrote:
> >
> > M.-A. Lemburg wrote:
> > >
> > > Fredrik Lundh wrote:
> > > >
> > > > mal wrote:

...

> > Hey Marc-Andre, don't try to reduce /F's crunching efforts to dust.
> 
> Oh, I didn't try to reduce Fredrik's efforts at all. To the
> contrary: I'm still looking forward to his melted down version
> of the database and the ctype tables.

Howdy. It may be that not you but I will melt /F's efforts
to dust, since I might have one or two days of time
to finish my long-promised code generator :-)
Well, probably just merging our dust :-)

> > Every bit costs money, and that's why
> > Van Jacobson packet-header compression has been invented and is
> > massively used. Whole armies of researchers are currently trying to
> > compensate the irresponsible bloatware that people of the higher
> > layers are imposing on them <wink>. Careful!
> 
> True, but why the hurry ?

I have no reason to complain since I didn't do my homework.
Anyway, a partially bloated distribution might be harmful
to Python's reputation. When looking through the whole
source set, there is no bloat anywhere. Everything is
well thought out, and fairly optimized between space and speed.
Well, there is this one module which cries for being replaced,
and which still prevents *me* from moving to Python 1.6 :-)

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer@appliedbiometrics.com>
Applied Biometrics GmbH      :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com

From martin@loewis.home.cs.tu-berlin.de  Wed Sep 20 20:22:24 2000
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Wed, 20 Sep 2000 21:22:24 +0200
Subject: [Python-Dev] Disabling Unicode readbuffer interface
Message-ID: <200009201922.VAA01669@loewis.home.cs.tu-berlin.de>

I just tried to disable the getreadbufferproc on Unicode objects. Most
of the test suite continues to work. 

test_unicode fails, which is caused by "s#" not working anymore
in readbuffer_encode when testing the unicode_internal encoding. That
could be fixed (*).

More concerning, sre fails when matching a unicode string. sre uses
the getreadbufferproc to get to the internal representation. If it has
sizeof(Py_UNICODE) times as many bytes as it is long, we got a unicode
buffer (?!?).

I'm not sure what the right solution would be in this case: I *think*
sre should have more specific knowledge of Unicode objects, so it
should support objects with a buffer interface representing a 1-byte
character string, or Unicode objects. Actually, is there anything
wrong with sre operating on string and unicode objects only? It
requires that the buffer has a single segment, anyway...

Regards,
Martin

(*) The 'internal encoding' function should directly get to the
representation of the unicode object, and readbuffer_encode could
become Python:

def readbuffer_encode(o, errors="strict"):
    b = buffer(o)
    return str(b), len(b)

or be removed altogether, as it would (rightfully) stop working on
unicode objects.

From Fredrik Lundh" <effbot@telia.com  Wed Sep 20 20:57:16 2000
From: Fredrik Lundh" <effbot@telia.com (Fredrik Lundh)
Date: Wed, 20 Sep 2000 21:57:16 +0200
Subject: [Python-Dev] Disabling Unicode readbuffer interface
References: <200009201922.VAA01669@loewis.home.cs.tu-berlin.de>
Message-ID: <021801c0233c$fec04fc0$766940d5@hagrid>

martin wrote:
> More concerning, sre fails when matching a unicode string. sre uses
> the getreadbufferproc to get to the internal representation. If it has
> sizeof(Py_UNICODE) times as many bytes as it is long, we got a unicode
> buffer (?!?).

...or an integer buffer.

(who says you can only use regular expressions on character
strings? ;-)

> I'm not sure what the right solution would be in this case: I *think*
> sre should have more specific knowledge of Unicode objects, so it
> should support objects with a buffer interface representing a 1-byte
> character string, or Unicode objects. Actually, is there anything
> wrong with sre operating on string and unicode objects only?

let's add a special case for unicode strings.  I'm actually using
the integer buffer support (don't ask), so I'd prefer to leave it
in there.

no time tonight, but I can check in a fix tomorrow.

</F>


From thomas@xs4all.net  Wed Sep 20 21:02:48 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Wed, 20 Sep 2000 22:02:48 +0200
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: <200009201707.TAA07172@pandora.informatik.hu-berlin.de>; from loewis@informatik.hu-berlin.de on Wed, Sep 20, 2000 at 07:07:06PM +0200
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de>
Message-ID: <20000920220248.E6675@xs4all.nl>

On Wed, Sep 20, 2000 at 07:07:06PM +0200, Martin von Loewis wrote:
> Adding error productions to ignore input until stabilization may be
> feasible on top of the existing parser. Adding tokens in the right
> place is probably harder - I'd personally go for a pure Python
> solution, that operates on Grammar/Grammar.

Don't forget that there are two kinds of SyntaxErrors in Python: those that
are generated by the tokenizer/parser, and those that are actually generated
by the (bytecode-)compiler. (inconsistent indent/dedent errors, incorrect
uses of (augmented) assignment, incorrect placing of particular keywords,
etc, are all generated while actually compiling the code.) Also, in order to
be really useful, the error-indicator would have to be pretty intelligent.
Imagine something like this:

if 1:

     doodle()

    forever()
    and_ever()
    <tons more code using 4-space indent>

With the current interpreter, that would generate a single warning, on the
line below the one that is the actual problem. If you continue searching for
errors, you'll get tons and tons of errors, all because the first line was
indented too far.

An easy way to work around it is probably to consider all tokenizer errors
and some of the compiler-generated errors (like indent/dedent ones) as
really-fatal errors, and to only handle the errors that are likely to be
manageable, skipping over the affected lines or considering them no-ops.

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!

From martin@loewis.home.cs.tu-berlin.de  Wed Sep 20 21:50:30 2000
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Wed, 20 Sep 2000 22:50:30 +0200
Subject: [Python-Dev] [ Bug #110676 ] fd.readlines() hangs (via popen3) (PR#385)
Message-ID: <200009202050.WAA02298@loewis.home.cs.tu-berlin.de>

I've closed your report at

http://sourceforge.net/bugs/?func=detailbug&bug_id=110676&group_id=5470

That is a bug in the application code. The slave tries to write 6000
bytes to stderr, and blocks after writing 4096 (a number measured on
Linux; more generally, after _PC_PIPE_BUF bytes).  The server starts
reading on stdin, and blocks also, so you get a deadlock.  The proper
solution is to use 

import popen2

r,w,e = popen2.popen3 ( 'python slave.py' ) 
e.readlines() 
r.readlines() 
r.close() 
e.close() 
w.close() 

as the master, and 

import sys,posix 

e = sys.stderr.write 
w = sys.stdout.write 

e(400*'this is a test\n') 
posix.close(2) 
w(400*'this is another test\n') 

as the slave. Notice that stderr must be closed after writing all
data, or readlines won't return. Also notice that posix.close must be
used, as sys.stderr.close() won't close stderr (apparently due to
concerns that assigning to sys.stderr would silently close it, so no
further errors could be printed).

In general, it would be better to use select(2) on the files returned
from popen3, or spread the reading of the individual files onto
several threads.
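Spelled with today's subprocess module instead of popen2 (a change of
spelling, not of substance), the select-based master looks roughly like
this; the child reproduces the 6000 bytes of stderr from the report:

```python
import select
import subprocess
import sys

# Child that writes enough to both streams to overflow a pipe buffer.
child = (
    "import sys\n"
    "sys.stderr.write(400 * 'this is a test\\n')\n"
    "sys.stdout.write(400 * 'this is another test\\n')\n"
)
proc = subprocess.Popen([sys.executable, "-c", child],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)

# Multiplex stdout and stderr with select() so neither pipe can fill
# up while we are blocked reading the other one.
out, err = b"", b""
streams = {proc.stdout: "out", proc.stderr: "err"}
while streams:
    ready, _, _ = select.select(list(streams), [], [])
    for f in ready:
        chunk = f.read1(65536)   # at most one raw read: won't block
        if not chunk:            # EOF on this stream
            del streams[f]
        elif streams[f] == "out":
            out += chunk
        else:
            err += chunk
proc.wait()
print(len(out), len(err))        # 8400 6000
```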

Regards,
Martin

From MarkH@ActiveState.com  Thu Sep 21 00:37:31 2000
From: MarkH@ActiveState.com (Mark Hammond)
Date: Thu, 21 Sep 2000 10:37:31 +1100
Subject: [Python-Dev] FW: [humorix] Unobfuscated Perl Code Contest
Message-ID: <ECEPKNMJLHAPFFJHDOJBGEFJDKAA.MarkH@ActiveState.com>

And now for something completely different ;-)
--
Unobfuscated Perl Code Contest
September 16, 19100

The Perl Gazette has announced the winners in the First
Annual _Un_obfuscated Perl Code Contest.  First place went
to Edwin Fuller, who submitted this unobfuscated program:

#!/usr/bin/perl
print "Hello world!\n";

"This was definitely a challenging contest," said an
ecstatic Edwin Fuller. "I've never written a Perl program
before that didn't have hundreds of qw( $ @ % & * | ? / \ !
# ~ ) symbols.  I really had to summon all of my
programming skills to produce an unobfuscated program."

The judges in the contest learned that many programmers
don't understand the meaning of 'unobfuscated perl'.  For
instance, one participant sent in this 'Hello world!'
program:

#!/usr/bin/perl
$x='unob';
open OUT, ">$x.c";
print OUT <<HERE_DOC;
#include <stdio.h>
int main(void) { 
 FILE *f=fopen("$x.sh", "w");
 fprintf(f,"echo Hello world!\\n");
 fclose(f);
 system("chmod +x $x.sh");
 system("./$x.sh"); return 0; 
}
HERE_DOC
close OUT;
system("gcc $x.c -o $x && ./$x");

"As an experienced Perl monger," said one of the judges, "I
can instantly tell that this program spits out C source
code that spits out a shell script to print 'Hello
world!'.  But this code certainly does not qualify as
unobfuscated Perl -- I mean, most of it isn't even written
in Perl!"

He added, "Out of all of the entries, only two were
actually unobfuscated perl.  Everything else looked like
line noise -- or worse."

The second place winner, Mrs. Sea Pearl, submitted the
following code:

#!/usr/bin/perl
use strict;
# Do nothing, successfully
exit(0);

"I think everybody missed the entire point of this
contest," ranted one judge.  "Participants were supposed to
produce code that could actually be understood by somebody
other than a ten-year Perl veteran.  Instead, we get an
implementation of a Java Virtual Machine.  And a version of
the Linux kernel ported to Win32 Perl.  Sheesh!"

In response to the news, a rogue group of Perl hackers have
presented a plan to add a "use really_goddamn_strict"
pragma to the language that would enforce readability and
unobfuscation.  With this pragma in force, the Perl
compiler might say:

 Warning: Program contains zero comments.  You've probably
 never seen or used one before; they begin with a #
 symbol.  Please start using them or else a representative
 from the nearest Perl Mongers group will come to your
 house and beat you over the head with a cluestick.

 Warning: Program uses a cute trick at line 125 that might
 make sense in C.  But this isn't C!

 Warning: Code at line 412 indicates that programmer is an
 idiot. Please correct error between chair and monitor.

 Warning: While There's More Than One Way To Do It, your
 method at line 523 is particularly stupid.  Please try
 again.

 Warning: Write-only code detected between lines 612 and
 734. While this code is perfectly legal, you won't have
 any clue what it does in two weeks.  I recommend you start
 over.

 Warning: Code at line 1,024 is indistinguishable from line
 noise or the output of /dev/random

 Warning: Have you ever properly indented a piece of code
 in your entire life?  Evidently not.

 Warning: I think you can come up with a more descriptive
 variable name than "foo" at line 1,523.

 Warning: Programmer attempting to re-invent the wheel at
 line 2,231. There's a function that does the exact same
 thing on CPAN -- and it actually works.

 Warning: Perl tries to make the easy jobs easy without
 making the hard jobs impossible -- but your code at line
 5,123 is trying to make an easy job impossible.  

 Error: Programmer failed to include required string "All
 hail Larry Wall" within program.  Execution aborted due to
 compilation errors.

Of course, convincing programmers to actually use that
pragma is another matter.  "If somebody actually wanted to
write readable code, why would they use Perl?  Let 'em use
Python!" exclaimed one Usenet regular.  "So this pragma is
a waste of electrons, just like use strict and the -w
command line parameter."

-
Humorix:      Linux and Open Source(nontm) on a lighter note
Archive:      http://humbolt.nl.linux.org/lists/
Web site:     http://www.i-want-a-website.com/about-linux/

----- End forwarded message -----


From bwarsaw@beopen.com  Thu Sep 21 01:02:22 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Wed, 20 Sep 2000 20:02:22 -0400 (EDT)
Subject: [Python-Dev] forwarded message from noreply@sourceforge.net
Message-ID: <14793.20494.375237.320590@anthem.concentric.net>

--3QcuDQyffX
Content-Type: text/plain; charset=us-ascii
Content-Description: message body text
Content-Transfer-Encoding: 7bit


For those of you who may not have received this message, please be
aware that SourceForge will have scheduled downtime this Friday night
until Saturday morning.

-Barry


--3QcuDQyffX
Content-Type: message/rfc822
Content-Description: forwarded message
Content-Transfer-Encoding: 7bit

Received: by ns1.beopen.com (mbox bwarsaw)
 (with Cubic Circle's cucipop (v1.31 1998/05/13) Tue Sep 12 20:03:02 2000)
Return-Path: <noreply@sourceforge.net>
Received: from lists.sourceforge.net (mail1.sourceforge.net [198.186.203.35])
	by ns1.beopen.com (8.9.3/8.9.3) with ESMTP id UAA92362
	for <bwarsaw@beopen.com>; Tue, 12 Sep 2000 20:01:43 -0700 (PDT)
	(envelope-from noreply@sourceforge.net)
Received: from delerium.i.sourceforge.net (sourceforge.net [198.186.203.33])
	by lists.sourceforge.net (8.9.3/8.9.3) with ESMTP id TAA05396;
	Tue, 12 Sep 2000 19:59:01 -0700
Received: (from nobody@localhost)
	by delerium.i.sourceforge.net (8.9.3/8.9.3) id TAA07397;
	Tue, 12 Sep 2000 19:58:47 -0700
Message-Id: <200009130258.TAA07397@delerium.i.sourceforge.net>
From: noreply@sourceforge.net
To: noreply@sourceforge.net
Subject: SourceForge:  Important Site News
Date: Tue, 12 Sep 2000 19:58:47 -0700
X-From_: noreply@sourceforge.net Tue Sep 12 20:01:43 2000
X-Authentication-Warning: delerium.i.sourceforge.net: nobody set sender to noreply@sourceforge.net using -f

Dear SourceForge User,

As the Director of SourceForge, I want to thank you in making
SourceForge the most successful Open Source Development Site in the
World.  We just surpassed 60,000 registered users and 8,800 open
source projects.  

We have a lot of exciting things planned for SourceForge in the coming
months.  These include faster servers, improved connectivity, mirrored
servers, and the addition of more hardware platforms to our compile
farm (including Sparc, PowerPC, Alpha, and more).

Did I mention additional storage?  The new servers that we will be
adding to the site will increase the total storage on SourceForge by
an additional 6 terabytes.

In 10 days we will begin the first phase of our new hardware build
out. This phase involves moving the primary site to our new location
in Fremont, California.  This move will take place on Friday night
(Sept 22nd) at 10pm and continue to 8am Saturday morning (Pacific
Standard Time).  During this time the site will be off-line as we 
make the physical change.  

I know many of you use Sourceforge as your primary development
environment, so I want to apologize in advance for the inconvenience
of this downtime.   If you have any concerns about this, please feel
free to email me.

I will write you again as this date nears, with a reminder and an
update.

Thank you again for using SourceForge.net

--
Patrick McGovern
Director of SourceForge.net 
Pat@sourceforge.net


---------------------
This email was sent from sourceforge.net. To change your email receipt
preferences, please visit the site and edit your account via the
"Account Maintenance" link.

Direct any questions to admin@sourceforge.net, or reply to this email.



--3QcuDQyffX--

From tim_one@email.msn.com  Thu Sep 21 01:19:41 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Wed, 20 Sep 2000 20:19:41 -0400
Subject: [Python-Dev] forwarded message from noreply@sourceforge.net
In-Reply-To: <14793.20494.375237.320590@anthem.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCMECNHHAA.tim_one@email.msn.com>

[Barry A. Warsaw]
> For those of you who may not have received this message, please be
> aware that SourceForge will have scheduled downtime this Friday night
> until Saturday morning.

... This move will take place on Friday night( Sept 22nd) at 10pm and
    continue to 8am Saturday morning (Pacific Standard Time).  During
    this time the site will be off-line as we make the physical change.

Looks to me like they started 30 hours early!  SF has been down more than up
all day, by my account.

So, for recreation in our idly desperate moments, let me recommend a quick
read, and especially to our friends at BeOpen, ActiveState and Secret Labs:

    http://linuxtoday.com/news_story.php3?ltsn=2000-09-20-006-21-OP-BZ-LF
    "Savor the Unmarketed Moment"
    "Marketers are drawn to money as surely as maggots were drawn
    to aforementioned raccoon ...
    The Bazaar is about to be blanketed with smog emitted by the
    Cathedral's smokestacks.  Nobody will be prevented from doing
    whatever he or she was doing before, but the oxygen level will
    be dropping and visibility will be impaired."

gasping-a-bit-from-the-branding-haze-himself<0.5-wink>-ly y'rs  - tim



From guido@beopen.com  Thu Sep 21 02:57:39 2000
From: guido@beopen.com (Guido van Rossum)
Date: Wed, 20 Sep 2000 20:57:39 -0500
Subject: [Python-Dev] SourceForge downtime postponed
In-Reply-To: Your message of "Wed, 20 Sep 2000 20:19:41 -0400."
 <LNBBLJKPBEHFEDALKOLCMECNHHAA.tim_one@email.msn.com>
References: <LNBBLJKPBEHFEDALKOLCMECNHHAA.tim_one@email.msn.com>
Message-ID: <200009210157.UAA05881@cj20424-a.reston1.va.home.com>

> Looks to me like they started 30 hours early!  SF has been down more than up
> all day, by my account.

Actually, they're back in business, and they improved the Bugs manager!
(E.g. there are now group management facilities on the front page.)

They also mailed around today that the move won't be until mid
October.  That's good, insofar as it doesn't take SF away from us
while we're in the heat of the 2nd beta release!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)

From guido@beopen.com  Thu Sep 21 03:17:20 2000
From: guido@beopen.com (Guido van Rossum)
Date: Wed, 20 Sep 2000 21:17:20 -0500
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: Your message of "Wed, 20 Sep 2000 16:22:44 +0200."
 <39C8C834.5E3B90E7@lemburg.com>
References: <39C8C834.5E3B90E7@lemburg.com>
Message-ID: <200009210217.VAA06180@cj20424-a.reston1.va.home.com>

> Would it be possible to write a Python syntax checker that doesn't
> stop processing at the first error it finds but instead tries
> to continue as far as possible (much like make -k) ?
> 
> If yes, could the existing Python parser/compiler be reused for
> such a tool ?
> 
> I was asked to write a tool which checks Python code and returns
> a list of found errors (syntax error and possibly even some
> lint warnings) instead of stopping at the first error it finds.

I had some ideas for this in the context of CP4E, and I even tried to
implement some, but didn't get far enough to check it in anywhere.
Then I lost track of the code in the BeOpen move.  (It wasn't very
much.)

I used a completely different approach to parsing: look at the code
from the outside in, e.g. when you see

  def foo(a,b,c):
      print a
      for i in range(b):
          while x:
              print v
      else:
          bah()

you first notice that there's a line starting with a 'def' keyword
followed by some indented stuff; then you notice that the indented
stuff is a line starting with 'print', a line starting with 'for'
followed by more indented stuff, and a line starting with 'else' and
more indented stuff; etc.

This requires tokenization to succeed -- you need to know what are
continuation lines, and what are strings and comments, before you can
parse the rest; but I believe it can be made successful in the light
of quite severe problems.

(No time to elaborate. :-( )

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From mal@lemburg.com  Thu Sep 21 11:32:23 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Thu, 21 Sep 2000 12:32:23 +0200
Subject: [Python-Dev] Python syntax checker ?
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de>
Message-ID: <39C9E3B7.5F9BFC01@lemburg.com>

Martin von Loewis wrote:
> 
> > Would it be possible to write a Python syntax checker that doesn't
> > stop processing at the first error it finds but instead tries to
> > continue as far as possible (much like make -k) ?
> 
> In "Compilerbau", this is referred to as "Fehlerstabilisierung"
> (error stabilization).  I suggest having a look at the dragon book
> (Aho, Sethi, Ullman).
> 
> The common approach is to insert or remove tokens, using some
> heuristics. In YACC, it is possible to add error productions to the
> grammar. Whenever an error occurs, the parser assigns all tokens to
> the "error" non-terminal until it concludes that it can perform a
> reduce action.
> 
> A similar approach might work for the Python Grammar. For each
> production, you'd define a set of stabilization tokens. If these are
> encountered, then the rule would be considered complete. Everything is
> consumed until a stabilization token is found.
> 
> For example, all expressions could be stabilized with a
> keyword. I.e. if you encounter a syntax error inside an expression,
> you ignore all tokens until you see 'print', 'def', 'while', etc.
> 
> In some cases, it may be better to add input rather than removing
> it. For example, if you get an "inconsistent dedent" error, you could
> assume that this really was a consistent dedent, or you could assume
> it was not meant as a dedent at all. Likewise, if you get a
> single-quote start-of-string, with no single-quote until end-of-line,
> you just should assume there was one.
> 
> Adding error productions to ignore input until stabilization may be
> feasible on top of the existing parser. Adding tokens in the right
> place is probably harder - I'd personally go for a pure Python
> solution, that operates on Grammar/Grammar.

I think I'd prefer a Python solution too -- perhaps I could
start out with tokenize.py and muddle along that way. pylint
from Aaron Watters should also provide some inspiration.
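[Editorial aside: a toy version of the stabilization idea, using compile() as the checker and a handful of top-level keywords as the stabilization set. This is my sketch, not Martin's proposal verbatim; a real tool would work on the token stream rather than on lines.]

```python
# "make -k"-style toy checker: on a SyntaxError, record it, skip ahead
# to the next stabilization line (one starting a top-level statement)
# and try compiling the rest again.

STABILIZERS = ("def ", "class ", "if ", "for ", "while ", "print", "import ")

def check(source):
    """Return [(lineno, message), ...] for every error found."""
    lines = source.splitlines()
    errors = []
    start = 0                            # 0-based index of current chunk
    while start < len(lines):
        try:
            compile("\n".join(lines[start:]), "<input>", "exec")
            break                        # the rest of the file is clean
        except SyntaxError as e:
            where = start + (e.lineno or 1)   # absolute 1-based line
            errors.append((where, e.msg))
            # stabilize: resume at the next top-level keyword line
            start = where
            while start < len(lines) and not lines[start].startswith(STABILIZERS):
                start += 1
    return errors

bad = "def f(x)\n    return x\nprint 'ok'\nprint('ok')\n"
found = check(bad)
```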

Thanks,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/

From mal@lemburg.com  Thu Sep 21 11:42:46 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Thu, 21 Sep 2000 12:42:46 +0200
Subject: [Python-Dev] Python syntax checker ?
References: <39C8C834.5E3B90E7@lemburg.com> <200009210217.VAA06180@cj20424-a.reston1.va.home.com>
Message-ID: <39C9E626.6CF85658@lemburg.com>

Guido van Rossum wrote:
> 
> > Would it be possible to write a Python syntax checker that doesn't
> > stop processing at the first error it finds but instead tries
> > to continue as far as possible (much like make -k) ?
> >
> > If yes, could the existing Python parser/compiler be reused for
> > such a tool ?
> >
> > I was asked to write a tool which checks Python code and returns
> > a list of found errors (syntax error and possibly even some
> > lint warnings) instead of stopping at the first error it finds.
> 
> I had some ideas for this in the context of CP4E, and I even tried to
> implement some, but didn't get far enough to check it in anywhere.
> Then I lost track of the code in the BeOpen move.  (It wasn't very
> much.)
> 
> I used a completely different approach to parsing: look at the code
> from the outside in, e.g. when you see
> 
>   def foo(a,b,c):
>       print a
>       for i in range(b):
>           while x:
>               print v
>       else:
>           bah()
> 
> you first notice that there's a line starting with a 'def' keyword
> followed by some indented stuff; then you notice that the indented
> stuff is a line starting with 'print', a line starting with 'for'
> followed by more indented stuff, and a line starting with 'else' and
> more indented stuff; etc.

This is similar to my initial idea: syntax checking should continue
(or possibly restart) at the next found "block" after an error.

E.g. in Thomas' case:

if 1:

     doodle()

    forever()
    and_ever()
    <tons more code using 4-space indent>

the checker should continue at forever() possibly by restarting
checking at that line.

> This requires tokenization to succeed -- you need to know what are
> continuation lines, and what are strings and comments, before you can
> parse the rest; but I believe it can be made to succeed even in the
> face of quite severe problems.

Looks like this is a highly non-trivial job...

Thanks,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/

From mal@lemburg.com  Thu Sep 21 11:58:57 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Thu, 21 Sep 2000 12:58:57 +0200
Subject: [Python-Dev] Disabling Unicode readbuffer interface
References: <200009201922.VAA01669@loewis.home.cs.tu-berlin.de>
Message-ID: <39C9E9F1.81C50A35@lemburg.com>

"Martin v. Loewis" wrote:
> 
> I just tried to disable the getreadbufferproc on Unicode objects. Most
> of the test suite continues to work.

Martin, haven't you read my last post to Guido ? 

Completely disabling getreadbuf is not a solution worth considering --
it breaks far too much code which the test suite doesn't even test,
e.g. MarkH's win32 stuff produces tons of Unicode objects which
can then get passed to potentially all of the stdlib. The test suite
doesn't check these cases.
 
Here's another possible solution to the problem:

    Special case Unicode in getargs.c's code for "s#" only and leave
    getreadbuf enabled. "s#" could then return the default encoded
    value for the Unicode object while SRE et al. could still use 
    PyObject_AsReadBuffer() to get at the raw data.
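[Editorial aside, with the disclaimer that this is hindsight: at the Python level, the difference between "an encoded copy" and "the raw buffer" can be probed with today's memoryview, which is roughly where this discussion ended up -- byte strings expose a raw read buffer, text strings do not.]

```python
# Hindsight illustration: bytes objects expose the raw read buffer,
# text strings must be encoded to bytes explicitly.

data = b"raw bytes"
view = memoryview(data)              # buffer interface: works on bytes
assert view.tobytes() == data

try:
    memoryview(u"unicode text")      # no raw-buffer access on text
    raw_buffer_ok = True
except TypeError:
    raw_buffer_ok = False

encoded = u"unicode text".encode("utf-8")   # the "encoded copy" route
```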

> test_unicode fails, which is caused by "s#" not working anymore when
> in readbuffer_encode when testing the unicode_internal encoding. That
> could be fixed (*).

True. It currently relies on the fact that "s#" returns the internal
raw data representation for Unicode.
 
> More concerning, sre fails when matching a unicode string. sre uses
> the getreadbufferproc to get to the internal representation. If it has
> sizeof(Py_UNICODE) times as many bytes as it is long, we got a unicode
> buffer (?!?).
> 
> I'm not sure what the right solution would be in this case: I *think*
> sre should have more specific knowledge of Unicode objects, so it
> should support objects with a buffer interface representing a 1-byte
> character string, or Unicode objects. Actually, is there anything
> wrong with sre operating on string and unicode objects only? It
> requires that the buffer has a single segment, anyway...

Ouch... but then again, it's a (documented ?) feature of re and
sre that they work on getreadbuf compatible objects, e.g.
mmap'ed files, so they'll have to use "s#" for accessing the
data.

Of course, with the above solution, SRE could use the 
PyObject_AsReadBuffer() API to get at the binary data.
 
> Regards,
> Martin
> 
> (*) The 'internal encoding' function should directly get to the
> representation of the unicode object, and readbuffer_encode could
> become Python:
> 
> def readbuffer_encode(o,errors="strict"):
>   b = buffer(o)
>   return str(b),len(b)
> 
> or be removed altogether, as it would (rightfully) stop working on
> unicode objects.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/

From jeremy@beopen.com  Thu Sep 21 15:58:54 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Thu, 21 Sep 2000 10:58:54 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib/xml/sax __init__.py,1.6,1.7
In-Reply-To: <200009211447.HAA02917@slayer.i.sourceforge.net>
References: <200009211447.HAA02917@slayer.i.sourceforge.net>
Message-ID: <14794.8750.83880.932497@bitdiddle.concentric.net>

Lars,

I just fixed the last set of checkins you made to the xml package.
You left the system in a state where test_minidom failed.  When part
of the regression test fails, it causes severe problems for all other
developers.  They have no way to know if the change they've just made
to the tuple object (for example) causes the failure or not.  Thus, it
is essential that the CVS repository never be in a state where the
regression tests fail.

You're kind of new around here, so I'll let you off with a warning
<wink>.

Jeremy

From martin@loewis.home.cs.tu-berlin.de  Thu Sep 21 17:19:53 2000
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Thu, 21 Sep 2000 18:19:53 +0200
Subject: [Python-Dev] Disabling Unicode readbuffer interface
In-Reply-To: <39C9E9F1.81C50A35@lemburg.com> (mal@lemburg.com)
References: <200009201922.VAA01669@loewis.home.cs.tu-berlin.de> <39C9E9F1.81C50A35@lemburg.com>
Message-ID: <200009211619.SAA00737@loewis.home.cs.tu-berlin.de>

> Martin, haven't you read my last post to Guido ? 

I've read

http://www.python.org/pipermail/python-dev/2000-September/016162.html

where you express a preference of disabling the getreadbuf slot, in
addition to special-casing Unicode objects in s#. I've just tested the
effects of your solution 1 on the test suite. Or are you referring to
a different message?

> Completely disabling getreadbuf is not a solution worth considering --
> it breaks far too much code which the test suite doesn't even test,
> e.g. MarkH's win32 stuff produces tons of Unicode objects which
> can then get passed to potentially all of the stdlib. The test suite
> doesn't check these cases.

Do you have any specific examples of what else would break? Looking at
all occurrences of 's#' in the standard library, I can't find a single
case where the current behaviour would be right - in all cases raising
an exception would be better. Again, any counter-examples?

>     Special case Unicode in getargs.c's code for "s#" only and leave
>     getreadbuf enabled. "s#" could then return the default encoded
>     value for the Unicode object while SRE et al. could still use 
>     PyObject_AsReadBuffer() to get at the raw data.

I think your option 2 is acceptable, although I feel that option 1
would expose more potential problems. What if an application
unknowingly passes a unicode object to md5.update? In testing, it may
always succeed as ASCII-only data is used, and it will suddenly start
breaking when non-ASCII strings are entered by some user. 

Using the internal rep would also be wrong in this case - the md5 hash
would depend on the byte order, which is probably not desired (*).

In any case, your option 2 would be a big improvement over the current
state, so I'll just shut up.

Regards,
Martin

(*) BTW, is there a meaningful way to define md5 for a Unicode string?
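[One plausible answer to the footnote, sketched with today's hashlib standing in for the md5 module of the 2.0 era: hash a canonical encoding such as UTF-8, so the digest depends neither on the internal representation nor on the byte order.]

```python
# Hash a canonical encoding rather than the internal buffer, so the
# digest is identical on big- and little-endian builds.

import hashlib

def md5_text(s):
    """MD5 of a text string, independent of platform byte order."""
    return hashlib.md5(s.encode("utf-8")).hexdigest()

assert md5_text(u"abc") == "900150983cd24fb0d6963f7d28e17f72"
digest = md5_text(u"M\xfcnchen")     # non-ASCII works the same way
```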

From DavidA@ActiveState.com  Thu Sep 21 17:32:30 2000
From: DavidA@ActiveState.com (David Ascher)
Date: Thu, 21 Sep 2000 09:32:30 -0700 (Pacific Daylight Time)
Subject: [Python-Dev] Unobfuscated Perl Code Contest
Message-ID: <Pine.WNT.4.21.0009210931540.1868-100000@loom>

ObPython at the end...

---da

Unobfuscated Perl Code Contest
September 16, 19100

The Perl Gazette has announced the winners in the First
Annual _Un_obfuscated Perl Code Contest.  First place went
to Edwin Fuller, who submitted this unobfuscated program:

#!/usr/bin/perl
print "Hello world!\n";

"This was definitely a challenging contest," said an
ecstatic Edwin Fuller. "I've never written a Perl program
before that didn't have hundreds of qw( $ @ % & * | ? / \ !
# ~ ) symbols.  I really had to summon all of my
programming skills to produce an unobfuscated program."

The judges in the contest learned that many programmers
don't understand the meaning of 'unobfuscated perl'.  For
instance, one participant sent in this 'Hello world!'
program:

#!/usr/bin/perl
$x='unob';
open OUT, ">$x.c";
print OUT <<HERE_DOC;
#include <stdio.h>
int main(void) { 
 FILE *f=fopen("$x.sh", "w");
 fprintf(f,"echo Hello world!\\n");
 fclose(f);
 system("chmod +x $x.sh");
 system("./$x.sh"); return 0; 
}
HERE_DOC
close OUT;
system("gcc $x.c -o $x && ./$x");

"As an experienced Perl monger," said one of the judges, "I
can instantly tell that this program spits out C source
code that spits out a shell script to print 'Hello
world!'.  But this code certainly does not qualify as
unobfuscated Perl -- I mean, most of it isn't even written
in Perl!"

He added, "Out of all of the entries, only two were
actually unobfuscated perl.  Everything else looked like
line noise -- or worse."

The second place winner, Mrs. Sea Pearl, submitted the
following code:

#!/usr/bin/perl
use strict;
# Do nothing, successfully
exit(0);

"I think everybody missed the entire point of this
contest," ranted one judge.  "Participants were supposed to
produce code that could actually be understood by somebody
other than a ten-year Perl veteran.  Instead, we get an
implementation of a Java Virtual Machine.  And a version of
the Linux kernel ported to Win32 Perl.  Sheesh!"

In response to the news, a rogue group of Perl hackers have
presented a plan to add a "use really_goddamn_strict"
pragma to the language that would enforce readability and
unobfuscation.  With this pragma in force, the Perl
compiler might say:

 Warning: Program contains zero comments.  You've probably
 never seen or used one before; they begin with a #
 symbol.  Please start using them or else a representative
 from the nearest Perl Mongers group will come to your
 house and beat you over the head with a cluestick.

 Warning: Program uses a cute trick at line 125 that might
 make sense in C.  But this isn't C!

 Warning: Code at line 412 indicates that programmer is an
 idiot. Please correct error between chair and monitor.

 Warning: While There's More Than One Way To Do It, your
 method at line 523 is particularly stupid.  Please try
 again.

 Warning: Write-only code detected between lines 612 and
 734. While this code is perfectly legal, you won't have
 any clue what it does in two weeks.  I recommend you start
 over.

 Warning: Code at line 1,024 is indistinguishable from line
 noise or the output of /dev/random

 Warning: Have you ever properly indented a piece of code
 in your entire life?  Evidently not.

 Warning: I think you can come up with a more descriptive
 variable name than "foo" at line 1,523.

 Warning: Programmer attempting to re-invent the wheel at
 line 2,231. There's a function that does the exact same
 thing on CPAN -- and it actually works.

 Warning: Perl tries to make the easy jobs easy without
 making the hard jobs impossible -- but your code at line
 5,123 is trying to make an easy job impossible.  

 Error: Programmer failed to include required string "All
 hail Larry Wall" within program.  Execution aborted due to
 compilation errors.

Of course, convincing programmers to actually use that
pragma is another matter.  "If somebody actually wanted to
write readable code, why would they use Perl?  Let 'em use
Python!" exclaimed one Usenet regular.  "So this pragma is
a waste of electrons, just like use strict and the -w
command line parameter."



From guido@beopen.com  Thu Sep 21 18:44:25 2000
From: guido@beopen.com (Guido van Rossum)
Date: Thu, 21 Sep 2000 12:44:25 -0500
Subject: [Python-Dev] Disabling Unicode readbuffer interface
In-Reply-To: Your message of "Thu, 21 Sep 2000 18:19:53 +0200."
 <200009211619.SAA00737@loewis.home.cs.tu-berlin.de>
References: <200009201922.VAA01669@loewis.home.cs.tu-berlin.de> <39C9E9F1.81C50A35@lemburg.com>
 <200009211619.SAA00737@loewis.home.cs.tu-berlin.de>
Message-ID: <200009211744.MAA17168@cj20424-a.reston1.va.home.com>

I haven't researched this to the bottom, but based on the email
exchange, it seems that keeping getreadbuf and special-casing s# for
Unicode objects makes the most sense.  That makes the 's' and 's#'
more similar.  Note that 'z#' should also be fixed.

I believe that SRE uses PyObject_AsReadBuffer() so that it can work
with arrays of shorts as well (when shorts are two chars).  Kind of
cute.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)

From mal@lemburg.com  Thu Sep 21 18:16:17 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Thu, 21 Sep 2000 19:16:17 +0200
Subject: [Python-Dev] Disabling Unicode readbuffer interface
References: <200009201922.VAA01669@loewis.home.cs.tu-berlin.de> <39C9E9F1.81C50A35@lemburg.com>
 <200009211619.SAA00737@loewis.home.cs.tu-berlin.de> <200009211744.MAA17168@cj20424-a.reston1.va.home.com>
Message-ID: <39CA4261.2B586B3F@lemburg.com>

Guido van Rossum wrote:
> 
> I haven't researched this to the bottom, but based on the email
> exchange, it seems that keeping getreadbuf and special-casing s# for
> Unicode objects makes the most sense.  That makes the 's' and 's#'
> more similar.  Note that 'z#' should also be fixed.
> 
> I believe that SRE uses PyObject_AsReadBuffer() so that it can work
> with arrays of shorts as well (when shorts are two chars).  Kind of
> cute.

Ok, I'll check in a patch for special-casing Unicode objects
in getargs.c's "s#" later today.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/

From mal@lemburg.com  Thu Sep 21 22:28:47 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Thu, 21 Sep 2000 23:28:47 +0200
Subject: [Python-Dev] Versioning for Python packages
References: <200009192300.RAA01451@localhost.localdomain> <39C87B69.DD0D2DC9@lemburg.com> <200009201507.KAA04851@cj20424-a.reston1.va.home.com>
 <39C8CEB5.65A70BBE@lemburg.com> <200009211538.KAA08180@cj20424-a.reston1.va.home.com>
Message-ID: <39CA7D8F.633E74D6@lemburg.com>

[Moved to python-dev from xml-sig]

Guido van Rossum wrote:
> 
> > Perhaps a good start would be using lib/python-2.0.0 as installation
> > target rather than just lib/python2. I'm sure this was discussed
> > before, but given the problems we had with this during the 1.5
> > cycle (with 1.5.2 providing not only patches, but also new
> > features), I think a more fine-grained approach should be
> > considered for future versions.
> 
> We're using lib/python2.0, and we plan not to make major releases with
> a 3rd level version number increment!  So I think that's not necessary.

Ah, that's good news :-)
 
> > About package versioning: how would the version be specified
> > in imports ?
> >
> > from mx.DateTime(1.4.0) import now
> > from mx(1.0.0).DateTime import now
> > from mx(1.0.0).DateTime(1.4.0) import now
> >
> > The directory layout would then look something like this:
> >
> > mx/
> >       1.0.0/
> >               DateTime/
> >                       1.4.0/
> >
> > Package __path__ hooks could be used to implement the
> > lookup... or of course some new importer.
> >
> > But what happens if there is no (old) version mx-1.0.0 installed ?
> > Should Python then default to mx-1.3.0 which is installed or
> > raise an ImportError ?
> >
> > This sounds like trouble... ;-)
> 
> You've got it.  Please move this to python-dev.  It's good PEP
> material!

Done.
 
> > > > We will have a similar problem with Unicode and the stdlib
> > > > during the Python 2.0 cycle: people will want to use Unicode
> > > > together with the stdlib, yet many modules in the stdlib
> > > > don't support Unicode. To remedy this, users will have to
> > > > patch the stdlib modules and put them somewhere so that they
> > > > can override the original 2.0 ones.
> > >
> > > They can use $PYTHONPATH.
> >
> > True, but why not help them a little by letting site
> > installations override the stdlib ? After all, distutils
> > standard target is site-packages.
> 
> Overrides of the stdlib are dangerous in general and should not be
> encouraged.
> 
> > > > BTW, with distutils coming on strong I don't really see a
> > > > need for any hacks: instead distutils should be given some
> > > > smart logic to do the right thing, ie. it should support
> > > > installing subpackages of a package. If that's not desired,
> > > > then I'd opt for overriding the whole package (without any
> > > > hacks to import the overridden one).
> > >
> > > That's another possibility.  But then distutils will have to become
> > > aware of package versions again.
> >
> > This shouldn't be hard to add to the distutils processing:
> > before starting an installation of a package, the package
> > pre-install hook could check which versions are installed
> > and then decide whether to raise an exception or continue.
> 
> Here's another half-baked idea about versions: perhaps packages could
> have a __version__.py file?

Hmm, I usually put a __version__ attribute right into the
__init__.py file of the package -- why another file ?

I think we should come up with a convention on these
meta-attributes. They are useful for normal modules
as well, e.g. __version__, __copyright__, __author__, etc.

Looks like it's PEP-time again ;-)
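[The convention being proposed might look like this at the top of a module or a package's __init__.py -- the values below are illustrative only:]

```python
# Example meta-attributes, per the convention sketched above.

__version__ = "1.4.0"
__author__ = "Marc-Andre Lemburg"
__copyright__ = "Copyright (c) 2000, example only"

def version_tuple(version):
    """'1.4.0' -> (1, 4, 0), so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))

assert version_tuple(__version__) > version_tuple("1.0.0")
```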

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/

From jeremy@beopen.com  Fri Sep 22 21:29:18 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Fri, 22 Sep 2000 16:29:18 -0400 (EDT)
Subject: [Python-Dev] Sunday code freeze
Message-ID: <14795.49438.749774.32159@bitdiddle.concentric.net>

We will need about a day to prepare the 2.0b2 release.  Thus, all
changes need to be committed by the end of the day on Sunday.  A code
freeze will be in effect starting then.

Please try to resolve any patches or bugs assigned to you before the
code freeze.

Jeremy

From thomas@xs4all.net  Sat Sep 23 13:26:51 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Sat, 23 Sep 2000 14:26:51 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0042.txt,1.19,1.20
In-Reply-To: <200009230440.VAA11540@slayer.i.sourceforge.net>; from fdrake@users.sourceforge.net on Fri, Sep 22, 2000 at 09:40:47PM -0700
References: <200009230440.VAA11540@slayer.i.sourceforge.net>
Message-ID: <20000923142651.A20757@xs4all.nl>

On Fri, Sep 22, 2000 at 09:40:47PM -0700, Fred L. Drake wrote:

> Modified Files:
> 	pep-0042.txt 
> Log Message:
> 
> Added request for a portable time.strptime() implementation.

As Tim noted, there already was a request for a separate implementation of
strptime(), though slightly differently worded. I've merged them.

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!

From tim_one@email.msn.com  Sat Sep 23 21:44:27 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sat, 23 Sep 2000 16:44:27 -0400
Subject: [Python-Dev] FW: Compiling Python 1.6 under MacOS X ...
Message-ID: <LNBBLJKPBEHFEDALKOLCIEJLHHAA.tim_one@email.msn.com>

FYI.

-----Original Message-----
From: python-list-admin@python.org
[mailto:python-list-admin@python.org]On Behalf Of Thelonious Georgia
Sent: Saturday, September 23, 2000 4:05 PM
To: python-list@python.org
Subject: Compiling Python 1.6 under MacOS X ...


Hey all-

I'm trying to get the 1.6 sources to compile under the public beta of MacOS
X. I ran ./configure, then make, and it does a pretty noble job of
compiling, up until I get:

cc -g -O2 -I./../Include -I.. -DHAVE_CONFIG_H   -c -o unicodectype.o
unicodectype.c
cc: Internal compiler error: program cpp-precomp got fatal signal 11
make[1]: *** [unicodectype.o] Error 1
make: *** [Objects] Error 2
[dhcppc4:~/Python-1.6] root#

cc -v returns:
Reading specs from /usr/libexec/ppc/2.95.2/specs
Apple Computer, Inc. version cc-796.3, based on gcc driver version 2.7.2.1
exec2

I have searched high and low, but can find no mention of this particular
error (which makes sense, sure, because of how long the beta has been out),
but any help in getting around this particular error would be appreciated.

Theo


--
http://www.python.org/mailman/listinfo/python-list



From tim_one@email.msn.com  Sun Sep 24 00:31:41 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sat, 23 Sep 2000 19:31:41 -0400
Subject: [Python-Dev] FW: regarding the Python Developer posting...
Message-ID: <LNBBLJKPBEHFEDALKOLCMEKAHHAA.tim_one@email.msn.com>

Dan, anyone can mail to python-dev@python.org.

Everyone else, this appears to be a followup on the Mac OSX compiler error.

Dan, I replied to that on comp.lang.python; if you have bugs to report
(platform-specific or otherwise) against the current CVS tree, SourceForge
is the best place to do it.  Since the 1.6 release is history, it's too late
to change anything there.

-----Original Message-----
From: Dan Wolfe [mailto:dkwolfe@pacbell.net]
Sent: Saturday, September 23, 2000 5:35 PM
To: tim_one@email.msn.com
Subject: regarding the Python Developer posting...


Howdy Tim,

I can't send to the development list so you're gonna have to suffer... ;-)

With regards to:

<http://www.python.org/pipermail/python-dev/2000-September/016188.html>

>cc -g -O2 -I./../Include -I.. -DHAVE_CONFIG_H   -c -o unicodectype.o
>unicodectype.c
>cc: Internal compiler error: program cpp-precomp got fatal signal 11
>make[1]: *** [unicodectype.o] Error 1
>make: *** [Objects] Error 2
>[dhcppc4:~/Python-1.6] root#

I believe it's a bug in cpp-precomp, as it also appears under 2.0.
I've been able to work around it by passing -traditional-cpp to the
compiler and it doesn't complain... ;-)  I'll take it up with Stan
Shebs (the compiler guy) when I go into work on Monday.

Now if I can just figure out test_sre.py, I'll be happy (e.g. it
compiles and runs but is still not passing all the regression tests).

- Dan



From gvwilson@nevex.com  Sun Sep 24 15:26:37 2000
From: gvwilson@nevex.com (Greg Wilson)
Date: Sun, 24 Sep 2000 10:26:37 -0400 (EDT)
Subject: [Python-Dev] serializing Python as XML
Message-ID: <Pine.LNX.4.10.10009241022590.14730-100000@akbar.nevex.com>

Hi, everyone.  One of the Software Carpentry designers has asked whether a
package exists to serialize Python data structures as XML, so that lists
of dictionaries of tuples of etc. can be exchanged with other XML-aware
tools.  Does this exist, even in pre-release form?  If not, I'd like to
hear from anyone who's already done any thinking in this direction.

Thanks,
Greg

p.s. has there ever been discussion about adding an '__xml__' method to
Python to augment the '__repr__' and '__str__' methods?
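[A minimal sketch of the kind of serializer being asked about -- nested lists, tuples and dicts to a simple ad-hoc XML vocabulary. The element names here are invented for illustration and not any real format; xml.marshal and the XML-RPC marshalling format are real alternatives.]

```python
# Toy serializer: nested lists/tuples/dicts/scalars to ad-hoc XML.

from xml.sax.saxutils import escape

def to_xml(obj):
    if isinstance(obj, dict):
        items = "".join("<item><key>%s</key>%s</item>"
                        % (escape(str(k)), to_xml(v))
                        for k, v in obj.items())
        return "<dict>%s</dict>" % items
    if isinstance(obj, (list, tuple)):
        tag = "list" if isinstance(obj, list) else "tuple"
        return "<%s>%s</%s>" % (tag, "".join(to_xml(v) for v in obj), tag)
    return "<value>%s</value>" % escape(str(obj))

doc = to_xml([{"a": (1, 2)}])
```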




From fdrake@beopen.com  Sun Sep 24 15:27:55 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Sun, 24 Sep 2000 10:27:55 -0400 (EDT)
Subject: [Python-Dev] serializing Python as XML
In-Reply-To: <Pine.LNX.4.10.10009241022590.14730-100000@akbar.nevex.com>
References: <Pine.LNX.4.10.10009241022590.14730-100000@akbar.nevex.com>
Message-ID: <14798.3947.965595.628569@cj42289-a.reston1.va.home.com>

Greg Wilson writes:
 > Hi, everyone.  One of the Software Carpentry designers has asked whether a
 > package exists to serialize Python data structures as XML, so that lists
 > of dictionaries of tuples of etc. can be exchanged with other XML-aware
 > tools.  Does this exist, even in pre-release form?  If not, I'd like to
 > hear from anyone who's already done any thinking in this direction.

  There are at least two implementations; I'm not sure of their exact
status.
  The PyXML package contains something called xml.marshal, written by
Andrew Kuchling.  I've also seen something called Python xml_objectify
(I think) announced on Freshmeat.net.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From gvwilson@nevex.com  Sun Sep 24 16:00:03 2000
From: gvwilson@nevex.com (Greg Wilson)
Date: Sun, 24 Sep 2000 11:00:03 -0400 (EDT)
Subject: [Python-Dev] installer difficulties
Message-ID: <Pine.LNX.4.10.10009241056300.14730-100000@akbar.nevex.com>

I just ran the "uninstall" that comes with BeOpen-Python-2.0b1.exe (the
September 8 version), then re-ran the installer.  A little dialog came up
saying "Corrupt installation detected", and the installer exits. Deleted
all of my g:\python2.0 files, all the registry entries, etc. --- same
behavior.

1. What is it looking at to determine whether the installation is corrupt?
   The installer itself, or my hard drive?  (If the former, my copy of the
   downloaded installer is 5,970,597 bytes long.)

2. What's the fix?

Thanks,
Greg




From skip@mojam.com  Sun Sep 24 16:19:10 2000
From: skip@mojam.com (Skip Montanaro)
Date: Sun, 24 Sep 2000 10:19:10 -0500 (CDT)
Subject: [Python-Dev] serializing Python as XML
In-Reply-To: <14798.3947.965595.628569@cj42289-a.reston1.va.home.com>
References: <Pine.LNX.4.10.10009241022590.14730-100000@akbar.nevex.com>
 <14798.3947.965595.628569@cj42289-a.reston1.va.home.com>
Message-ID: <14798.7022.727038.770709@beluga.mojam.com>

    >> Hi, everyone.  One of the Software Carpentry designers has asked
    >> whether a package exists to serialize Python data structures as XML,
    >> so that lists of dictionaries of tuples of etc. can be exchanged with
    >> other XML-aware tools.

    Fred> There are at least two implementations ... PyXML & xml_objectify 

You can also use XML-RPC (http://www.xmlrpc.com/) or SOAP
(http://www.develop.com/SOAP/).  In Fredrik Lundh's xmlrpclib library
(http://www.pythonware.com/products/xmlrpc/) you can access the dump and
load functions without actually using the rest of the protocol if you like.
I suspect there are similar hooks in soaplib
(http://www.pythonware.com/products/soap/).
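For the curious, the dump/load hooks can be exercised standalone like this (shown with the modern module name, xmlrpc.client; it was xmlrpclib at the time):

```python
from xmlrpc.client import dumps, loads  # "xmlrpclib" in the Python of this era

data = [{"point": (1, 2)}, {"label": "a"}]
payload = dumps((data,))            # serialize; no RPC call is made
(roundtrip,), method = loads(payload)
# XML-RPC has no tuple type, so tuples come back as lists
assert roundtrip == [{"point": [1, 2]}, {"label": "a"}]
```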

-- 
Skip Montanaro (skip@mojam.com)
http://www.mojam.com/
http://www.musi-cal.com/


From tim_one@email.msn.com  Sun Sep 24 18:55:15 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sun, 24 Sep 2000 13:55:15 -0400
Subject: [Python-Dev] installer difficulties
In-Reply-To: <Pine.LNX.4.10.10009241056300.14730-100000@akbar.nevex.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCOELDHHAA.tim_one@email.msn.com>

[posted & mailed]

[Greg Wilson]
> I just ran the "uninstall" that comes with BeOpen-Python-2.0b1.exe (the
> September 8 version), then re-ran the installer.  A little dialog came up
> saying "Corrupt installation detected", and the installer exits. Deleted
> all of my g:\python2.0 files, all the registry entries, etc. --- same
> behavior.
>
> 1. What is it looking at to determine whether the installation is
>    corrupt?

While I built the installer, I have no idea!  It's an internal function of
the Wise software, and-- you guessed it <wink> --that's closed-source.  I
*believe* it's failing an internal consistency check, and that's all.

>    The installer itself, or my hard drive?  (If the former, my copy
>    of the downloaded installer is 5,970,597 bytes long.)

That is the correct size.

> 2. What's the fix?

Dunno.  It's a new one on me, and I uninstall and reinstall many times each
week.  Related things occasionally pop up on Python-Help, and are usually
fixed there by asking the victim to try downloading again with some other
program (Netscape instead of IE, or vice versa, or FTP, or GetRight, ...).

Here's a better check, provided you have *some* version of Python sitting
around:

>>> path = "/updates/BeOpen-Python-2.0b1.exe" # change accordingly
>>> import os
>>> os.path.getsize(path)
5970597
>>> guts = open(path, "rb").read()
>>> len(guts)
5970597
>>> import sha
>>> print sha.new(guts).hexdigest()
ef495d351a93d887f5df6b399747d4e96388b0d5
>>>

If you don't get the same SHA digest, it is indeed corrupt despite having
the correct size.  Let us know!
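A present-day equivalent of the check above (the standalone sha module was later folded into hashlib) would be:

```python
import hashlib
import os.path

path = "/updates/BeOpen-Python-2.0b1.exe"  # change accordingly
if os.path.exists(path):
    guts = open(path, "rb").read()
    print(len(guts))                       # should be 5970597
    print(hashlib.sha1(guts).hexdigest())  # should match the digest above
```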




From martin@loewis.home.cs.tu-berlin.de  Sun Sep 24 18:56:04 2000
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Sun, 24 Sep 2000 19:56:04 +0200
Subject: [Python-Dev] serializing Python as XML
Message-ID: <200009241756.TAA00735@loewis.home.cs.tu-berlin.de>

> whether a package exists to serialize Python data structures as XML,

Zope has a variant of pickle where pickles follow an XML DTD (i.e. it
pickles into XML). I believe the current implementation first pickles
into an ASCII pickle and reformats that as XML afterwards, but that is
an implementation issue.

> so that lists of dictionaries of tuples of etc. can be exchanged
> with other XML-aware tools.

See, this is one of the common XML pitfalls. Even though the output of
that is well-formed XML, and even though there is an imaginary DTD (*)
which this XML could be validated against: it is still unlikely that
other XML-aware tools could make much use of the format, at least if
the original Python contained some "interesting" objects
(e.g. instance objects). Even with only dictionaries of tuples: the
Zope DTD supports cyclic structures, and it would not be
straightforward to support that back-referencing structure in some
other tool (although it is certainly possible).
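The cycle problem can be seen directly: a self-referencing structure survives a pickle round trip thanks to pickle's back-references, whereas a naive tree-shaped XML dump would recurse forever without some agreed-upon back-reference convention. A minimal illustration:

```python
import pickle

lst = []
lst.append(lst)          # a cyclic list: lst[0] is lst itself
copy = pickle.loads(pickle.dumps(lst))
# pickle's internal back-references preserve the cycle exactly
assert copy[0] is copy
```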

XML alone does not give interoperability. You need some agreed-upon
DTD for that. If the other XML-aware tool is willing to adapt to a
Python-provided DTD - why couldn't it read Python pickles in the first
place?

Regards,
Martin

(*) There have been repeated promises of actually writing down the DTD
some day.


From tim_one@email.msn.com  Sun Sep 24 19:47:11 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sun, 24 Sep 2000 14:47:11 -0400
Subject: [Python-Dev] How about braindead Unicode "compression"?
Message-ID: <LNBBLJKPBEHFEDALKOLCCELFHHAA.tim_one@email.msn.com>

unicodedatabase.c has 64K lines of the form:

/* U+009a */ { 13, 0, 15, 0, 0 },

Each struct getting initialized there takes 8 bytes on most machines (4
unsigned chars + a char*).

However, there are only 3,567 unique structs (54,919 of them are all 0's!).
So a braindead-easy mechanical "compression" scheme would simply be to
create one vector with the 3,567 unique structs, and replace the 64K record
constructors with 2-byte indices into that vector.  Data size goes down from

    64K * 8b = 512Kb

to

    3567 * 8b + 64K * 2b ~= 156Kb

at once; the source-code transformation is easy to do via a Python program;
the compiler warnings on my platform (due to unicodedatabase.c's sheer size)
can go away; and one indirection is added to access (which remains utterly
uniform).
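The transformation is easy to sketch in Python (hypothetical record data here; the real script would chew the structs out of unicodedatabase.c):

```python
# Hypothetical input: one 5-field record per code point.
records = [(13, 0, 15, 0, 0), (0, 0, 0, 0, 0), (13, 0, 15, 0, 0)]

unique = {}    # record -> index into the unique-record vector
indices = []   # one small index per code point
for rec in records:
    if rec not in unique:
        unique[rec] = len(unique)
    indices.append(unique[rec])

table = sorted(unique, key=unique.get)   # the ~3,567-entry vector
assert table == [(13, 0, 15, 0, 0), (0, 0, 0, 0, 0)]
assert indices == [0, 1, 0]              # the 64K vector of 2-byte indices
assert table[indices[2]] == records[2]   # access gains one indirection
```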

Previous objections to compression were, as far as I could tell, based on
fear of elaborate schemes that rendered the code unreadable and the access
code excruciating.  But if we can get more than a factor of 3 with little
work and one new uniform indirection, do people still object?

If nobody objects by the end of today, I intend to do it.




From tim_one@email.msn.com  Sun Sep 24 21:26:40 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sun, 24 Sep 2000 16:26:40 -0400
Subject: [Python-Dev] installer difficulties
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOELDHHAA.tim_one@email.msn.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCEELKHHAA.tim_one@email.msn.com>

[Tim]
> ...
> Here's a better check, provided you have *some* version of Python sitting
> around:
>
> >>> path = "/updates/BeOpen-Python-2.0b1.exe" # change accordingly
> >>> import os
> >>> os.path.getsize(path)
> 5970597
> >>> guts = open(path, "rb").read()
> >>> len(guts)
> 5970597
> >>> import sha
> >>> print sha.new(guts).hexdigest()
> ef495d351a93d887f5df6b399747d4e96388b0d5
> >>>
>
> If you don't get the same SHA digest, it is indeed corrupt despite having
> the correct size.  Let us know!

Greg reports getting

  e65aac55368b823e1c0bc30c0a5bc4dd2da2adb4

Someone else care to try this?  I tried it both on the original installer I
uploaded to BeOpen, and on the copy I downloaded back from the pythonlabs
download page right after Fred updated it.  At this point I don't know
whether BeOpen's disk is corrupted, or Greg's, or sha has a bug, or ...




From guido@beopen.com  Sun Sep 24 22:47:52 2000
From: guido@beopen.com (Guido van Rossum)
Date: Sun, 24 Sep 2000 16:47:52 -0500
Subject: [Python-Dev] How about braindead Unicode "compression"?
In-Reply-To: Your message of "Sun, 24 Sep 2000 14:47:11 -0400."
 <LNBBLJKPBEHFEDALKOLCCELFHHAA.tim_one@email.msn.com>
References: <LNBBLJKPBEHFEDALKOLCCELFHHAA.tim_one@email.msn.com>
Message-ID: <200009242147.QAA06557@cj20424-a.reston1.va.home.com>

> unicodedatabase.c has 64K lines of the form:
> 
> /* U+009a */ { 13, 0, 15, 0, 0 },
> 
> Each struct getting initialized there takes 8 bytes on most machines (4
> unsigned chars + a char*).
> 
> However, there are only 3,567 unique structs (54,919 of them are all 0's!).
> So a braindead-easy mechanical "compression" scheme would simply be to
> create one vector with the 3,567 unique structs, and replace the 64K record
> constructors with 2-byte indices into that vector.  Data size goes down from
> 
>     64K * 8b = 512Kb
> 
> to
> 
>     3567 * 8b + 64K * 2b ~= 156Kb
> 
> at once; the source-code transformation is easy to do via a Python program;
> the compiler warnings on my platform (due to unicodedatabase.c's sheer size)
> can go away; and one indirection is added to access (which remains utterly
> uniform).
> 
> Previous objections to compression were, as far as I could tell, based on
> fear of elaborate schemes that rendered the code unreadable and the access
> code excruciating.  But if we can get more than a factor of 3 with little
> work and one new uniform indirection, do people still object?
> 
> If nobody objects by the end of today, I intend to do it.

Go for it!  I recall seeing that file and thinking the same thing.

(Isn't the VC++ compiler warning about line numbers > 64K?  Then you'd
have to put two pointers on one line to make it go away, regardless of
the size of the generated object code.)

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From guido@beopen.com  Sun Sep 24 22:58:53 2000
From: guido@beopen.com (Guido van Rossum)
Date: Sun, 24 Sep 2000 16:58:53 -0500
Subject: [Python-Dev] installer difficulties
In-Reply-To: Your message of "Sun, 24 Sep 2000 16:26:40 -0400."
 <LNBBLJKPBEHFEDALKOLCEELKHHAA.tim_one@email.msn.com>
References: <LNBBLJKPBEHFEDALKOLCEELKHHAA.tim_one@email.msn.com>
Message-ID: <200009242158.QAA06679@cj20424-a.reston1.va.home.com>

>   e65aac55368b823e1c0bc30c0a5bc4dd2da2adb4
> 
> Someone else care to try this?  I tried it both on the original installer I
> uploaded to BeOpen, and on the copy I downloaded back from the pythonlabs
> download page right after Fred updated it.  At this point I don't know
> whether BeOpen's disk is corrupted, or Greg's, or sha has a bug, or ...

I just downloaded it again and tried your code, and got the same value
as Greg!  I also get Greg's error on Windows with the newly downloaded
version.

Conclusion: the new Zope-ified site layout has a corrupt file.

I'll try to get in touch with the BeOpen web developers right away!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From mal@lemburg.com  Sun Sep 24 22:20:06 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Sun, 24 Sep 2000 23:20:06 +0200
Subject: [Python-Dev] How about braindead Unicode "compression"?
References: <LNBBLJKPBEHFEDALKOLCCELFHHAA.tim_one@email.msn.com>
Message-ID: <39CE7006.D60A603D@lemburg.com>

Tim Peters wrote:
> 
> unicodedatabase.c has 64K lines of the form:
> 
> /* U+009a */ { 13, 0, 15, 0, 0 },
> 
> Each struct getting initialized there takes 8 bytes on most machines (4
> unsigned chars + a char*).
> 
> However, there are only 3,567 unique structs (54,919 of them are all 0's!).

That's because there are only around 11k definitions in the
Unicode database -- most of the rest is divided into private-use,
user-defined and surrogate high/low byte reserved ranges.

> So a braindead-easy mechanical "compression" scheme would simply be to
> create one vector with the 3,567 unique structs, and replace the 64K record
> constructors with 2-byte indices into that vector.  Data size goes down from
> 
>     64K * 8b = 512Kb
> 
> to
> 
>     3567 * 8b + 64K * 2b ~= 156Kb
> 
> at once; the source-code transformation is easy to do via a Python program;
> the compiler warnings on my platform (due to unicodedatabase.c's sheer size)
> can go away; and one indirection is added to access (which remains utterly
> uniform).
> 
> Previous objections to compression were, as far as I could tell, based on
> fear of elaborate schemes that rendered the code unreadable and the access
> code excruciating.  But if we can get more than a factor of 3 with little
> work and one new uniform indirection, do people still object?

Oh, there was no fear about making the code unreadable...
Christian and Fredrik were both working on smart schemes.
My only objection about these was missing documentation
and generation tools -- vast tables of completely random
looking byte data are unreadable ;-)
 
> If nobody objects by the end of today, I intend to do it.

+1 from here.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From tim_one@email.msn.com  Sun Sep 24 22:25:34 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sun, 24 Sep 2000 17:25:34 -0400
Subject: [Python-Dev] installer difficulties
In-Reply-To: <200009242158.QAA06679@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGELOHHAA.tim_one@email.msn.com>

[Guido]
> I just downloaded it again and tried your code, and got the same value
> as Greg!  I also get Greg's error on Windows with the newly downloaded
> version.
>
> Conclusion: the new Zope-ified site layout has a corrupt file.
>
> I'll try to get in touch with the BeOpen web developers right away!

Thanks!  In the meantime, I pointed Greg to anonymous FTP at
python.beopen.com, in directory /pub/tmp/.  That's where I originally
uploaded the installer, and I doubt our webmasters have had a chance to
corrupt it yet <0.9 wink>.




From mal@lemburg.com  Sun Sep 24 22:28:29 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Sun, 24 Sep 2000 23:28:29 +0200
Subject: [Python-Dev] FW: regarding the Python Developer posting...
References: <LNBBLJKPBEHFEDALKOLCMEKAHHAA.tim_one@email.msn.com>
Message-ID: <39CE71FD.8858B71D@lemburg.com>

Tim Peters wrote:
> 
> Dan, anyone can mail to python-dev@python.org.
> 
> Everyone else, this appears to be a followup on the Mac OSX compiler error.
> 
> Dan, I replied to that on comp.lang.python; if you have bugs to report
> (platform-specific or otherwise) against the current CVS tree, SourceForge
> is the best place to do it.  Since the 1.6 release is history, it's too late
> to change anything there.
> 
> -----Original Message-----
> From: Dan Wolfe [mailto:dkwolfe@pacbell.net]
> Sent: Saturday, September 23, 2000 5:35 PM
> To: tim_one@email.msn.com
> Subject: regarding the Python Developer posting...
> 
> Howdy Tim,
> 
> I can't send to the development list so your gonna have to suffer... ;-)
> 
> With regards to:
> 
> <http://www.python.org/pipermail/python-dev/2000-September/016188.html>
> 
> >cc -g -O2 -I./../Include -I.. -DHAVE_CONFIG_H   -c -o unicodectype.o
> >unicodectyc
> >cc: Internal compiler error: program cpp-precomp got fatal signal
> 11make[1]:
> >*** [unicodectype.o] Error 1
> >make: *** [Objects] Error 2
> >dhcppc4:~/Python-1.6] root#
> 
> I believe it's a bug in the cpp pre-comp as it also appears under 2.0.
> I've been able to work around it by passing -traditional-cpp to the
> compiler and it doesn't complain... ;-)  I'll take it up with Stan Steb
> (the compiler guy) when I go into work on Monday.

You could try to enable the macro at the top of unicodectype.c:
 
#if defined(macintosh) || defined(MS_WIN64)
/*XXX This was required to avoid a compiler error for an early Win64
 * cross-compiler that was used for the port to Win64. When the platform is
 * released the MS_WIN64 inclusion here should no longer be necessary.
 */
/* This probably needs to be defined for some other compilers too. It breaks the
** 5000-label switch statement up into switches with around 1000 cases each.
*/
#define BREAK_SWITCH_UP return 1; } switch (ch) {
#else
#define BREAK_SWITCH_UP /* nothing */
#endif

If it does compile with the work-around enabled, please
give us a set of defines which identify the compiler and
platform so we can enable it per default for your setup.

> Now if I can just figure out the test_sre.py, I'll be happy. (eg it
> compiles and runs but is still not passing all the regression tests).

Thanks,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From guido@beopen.com  Sun Sep 24 23:34:28 2000
From: guido@beopen.com (Guido van Rossum)
Date: Sun, 24 Sep 2000 17:34:28 -0500
Subject: [Python-Dev] installer difficulties
In-Reply-To: Your message of "Sun, 24 Sep 2000 17:25:34 -0400."
 <LNBBLJKPBEHFEDALKOLCGELOHHAA.tim_one@email.msn.com>
References: <LNBBLJKPBEHFEDALKOLCGELOHHAA.tim_one@email.msn.com>
Message-ID: <200009242234.RAA06931@cj20424-a.reston1.va.home.com>

> Thanks!  In the meantime, I pointed Greg to anonymous FTP at
> python.beopen.com, in directory /pub/tmp/.  That's where I originally
> uploaded the installer, and I doubt our webmasters have had a chance to
> corrupt it yet <0.9 wink>.

Other readers of this forum may notice other cruft there that appears
useful; however, I believe the files found there may not be the correct
versions either.

BTW, the source tarball on the new pythonlabs.com site is also
corrupt; the docs are bad links; I suspect that the RPMs are also
corrupt.  What an embarrassment.  (We proofread all the webpages but
never thought of testing the downloads!)

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From tim_one@email.msn.com  Sun Sep 24 22:39:49 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sun, 24 Sep 2000 17:39:49 -0400
Subject: [Python-Dev] How about braindead Unicode "compression"?
In-Reply-To: <39CE7006.D60A603D@lemburg.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCKELPHHAA.tim_one@email.msn.com>

[Tim]
>> Previous objections to compression were, as far as I could
>> tell, based on fear of elaborate schemes that rendered the code
>> unreadable and the access code excruciating.  But if we can get
>> more than a factor of 3 with little work and one new uniform
>> indirection, do people still object?

[M.-A. Lemburg]
> Oh, there was no fear about making the code unreadable...
> Christian and Fredrik were both working on smart schemes.
> My only objection about these was missing documentation
> and generation tools -- vast tables of completely random
> looking byte data are unreadable ;-)

OK, you weren't afraid of making the code unreadable, but you did object to
making it unreadable.  Got it <wink>.  My own view is that the C data table
source code "should be" generated by a straightforward Python program
chewing over the unicode.org data files.  But since that's the correct view,
I'm sure it's yours too.

>> If nobody objects by the end of today, I intend to do it.

> +1 from here.

/F and I talked about it offline.  We'll do *something* before the day is
done, and I suspect everyone will be happy.  Waiting for a superb scheme has
thus far stopped us from making any improvements at all, and at this late
point a Big Crude Yet Delicate Hammer is looking mighty attractive.

petitely y'rs  - tim




From effbot@telia.com  Sun Sep 24 23:01:06 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Mon, 25 Sep 2000 00:01:06 +0200
Subject: [Python-Dev] How about braindead Unicode "compression"?
References: <LNBBLJKPBEHFEDALKOLCKELPHHAA.tim_one@email.msn.com>
Message-ID: <008f01c02672$f3f1a100$766940d5@hagrid>

tim wrote:
> /F and I talked about it offline.  We'll do *something* before the day is
> done, and I suspect everyone will be happy.

Okay, I just went ahead and checked in a new version of the
unicodedata stuff, based on my earlier unidb work.

On windows, the new unicodedata PYD is 120k (down from 600k),
and the source distribution should be about 2 megabytes smaller
than before (!).

If you're on a non-windows platform, please try out the new code
as soon as possible.  You need to check out:

        Modules/unicodedata.c
        Modules/unicodedatabase.c
        Modules/unicodedatabase.h
        Modules/unicodedata_db.h (new file)

Let me know if there are any build problems.

I'll check in the code generator script as soon as I've figured out
where to put it...  (how about Tools/unicode?)

</F>



From mal@lemburg.com  Mon Sep 25 08:57:36 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Mon, 25 Sep 2000 09:57:36 +0200
Subject: [Python-Dev] How about braindead Unicode "compression"?
References: <LNBBLJKPBEHFEDALKOLCKELPHHAA.tim_one@email.msn.com>
Message-ID: <39CF0570.FDDCF03C@lemburg.com>

Tim Peters wrote:
> 
> [Tim]
> >> Previous objections to compression were, as far as I could
> >> tell, based on fear of elaborate schemes that rendered the code
> >> unreadable and the access code excruciating.  But if we can get
> >> more than a factor of 3 with little work and one new uniform
> >> indirection, do people still object?
> 
> [M.-A. Lemburg]
> > Oh, there was no fear about making the code unreadable...
> > Christian and Fredrik were both working on smart schemes.
> > My only objection about these was missing documentation
> > and generation tools -- vast tables of completely random
> > looking byte data are unreadable ;-)
> 
> OK, you weren't afraid of making the code unreadable, but you did object to
> making it unreadable.  Got it <wink>. 

Ah yes, the old coffee syndrome again (or maybe just the jet-lag
from watching the Olympics in the very early morning hours).

What I meant was that I consider checking in unreadable
binary goop *without* documentation and generation tools
not a good idea. Now that Fredrik checked in the generators
as well, everything is fine.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From thomas@xs4all.net  Mon Sep 25 14:56:17 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Mon, 25 Sep 2000 15:56:17 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules posixmodule.c,2.173,2.174
In-Reply-To: <200009251322.GAA21574@slayer.i.sourceforge.net>; from gvanrossum@users.sourceforge.net on Mon, Sep 25, 2000 at 06:22:04AM -0700
References: <200009251322.GAA21574@slayer.i.sourceforge.net>
Message-ID: <20000925155616.H20757@xs4all.nl>

On Mon, Sep 25, 2000 at 06:22:04AM -0700, Guido van Rossum wrote:
> Update of /cvsroot/python/python/dist/src/Modules
> In directory slayer.i.sourceforge.net:/tmp/cvs-serv21486
> 
> Modified Files:
> 	posixmodule.c 
> Log Message:
> Add missing prototypes for the benefit of SunOS 4.1.4 */

These should go in pyport.h ! Unless you have some reason not to export them
to other files, but in that case we need to take a good look at the whole
pyport.h thing.

> + #if defined(sun) && !defined(__SVR4)
> + /* SunOS 4.1.4 doesn't have prototypes for these: */
> + extern int rename(const char *, const char *);
> + extern int pclose(FILE *);
> + extern int fclose(FILE *);
> + #endif
> + 


-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From jim@interet.com  Mon Sep 25 14:55:56 2000
From: jim@interet.com (James C. Ahlstrom)
Date: Mon, 25 Sep 2000 09:55:56 -0400
Subject: [Python-Dev] Python syntax checker ?
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de>
Message-ID: <39CF596C.17BA4DC5@interet.com>

Martin von Loewis wrote:
> 
> > Would it be possible to write a Python syntax checker that doesn't
> > stop processing at the first error it finds but instead tries to
> > continue as far as possible (much like make -k) ?
> 
> The common approch is to insert or remove tokens, using some
> heuristics. In YACC, it is possible to add error productions to the
> grammar. Whenever an error occurs, the parser assigns all tokens to
> the "error" non-terminal until it concludes that it can perform a
> reduce action.

The following is based on trying (a great learning experience)
to write a better Python lint.

There are IMHO two problems with the current Python
grammar file.  It is not possible to express operator
precedence, so deliberate shift/reduce conflicts are
used instead.  That makes the parse tree complicated
and non intuitive.  And there is no provision for error
productions.  YACC has both of these as built-in features.

I also found speed problems with tokenize.py.  AFAIK,
it only exists because tokenizer.c does not provide
comments as tokens, but eats them instead.  We could
modify tokenizer.c, then make tokenize.py be the
interface to the fast C tokenizer.  This eliminates the
problem of updating both too.
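The point about comments is easy to demonstrate with today's tokenize module (the same idea as the 1.5.2-era tokenize.py):

```python
import io
import tokenize

src = "x = 1  # set x\n"
tokens = [(tokenize.tok_name[tok.type], tok.string)
          for tok in tokenize.generate_tokens(io.StringIO(src).readline)]
# The pure-Python tokenizer reports the comment as a token;
# the C tokenizer (tokenizer.c) simply eats it.
assert ("COMMENT", "# set x") in tokens
```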

So how about re-writing the Python grammar in YACC in
order to use its more advanced features??  The simple
YACC grammar I wrote for 1.5.2 plus an altered tokenizer.c
parsed the whole Lib/*.py in a couple seconds vs. 30
seconds for the first file using Aaron Watters' Python
lint grammar written in Python.

JimA


From bwarsaw@beopen.com  Mon Sep 25 15:18:36 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Mon, 25 Sep 2000 10:18:36 -0400 (EDT)
Subject: [Python-Dev] Python syntax checker ?
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de>
 <39CF596C.17BA4DC5@interet.com>
Message-ID: <14799.24252.537090.326130@anthem.concentric.net>

>>>>> "JCA" == James C Ahlstrom <jim@interet.com> writes:

    JCA> So how about re-writing the Python grammar in YACC in
    JCA> order to use its more advanced features??  The simple
    JCA> YACC grammar I wrote for 1.5.2 plus an altered tokenizer.c
    JCA> parsed the whole Lib/*.py in a couple seconds vs. 30
    JCA> seconds for the first file using Aaron Watters' Python
    JCA> lint grammar written in Python.

I've been wanting to check out Antlr (www.antlr.org) because it gives
us the /possibility/ to use the same grammar files for both CPython
and JPython.  One problem though is that it generates Java and C++ so
we'd be accepting our first C++ into the core if we went this route.

-Barry


From gward@mems-exchange.org  Mon Sep 25 15:40:09 2000
From: gward@mems-exchange.org (Greg Ward)
Date: Mon, 25 Sep 2000 10:40:09 -0400
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: <39C8C834.5E3B90E7@lemburg.com>; from mal@lemburg.com on Wed, Sep 20, 2000 at 04:22:44PM +0200
References: <39C8C834.5E3B90E7@lemburg.com>
Message-ID: <20000925104009.A1747@ludwig.cnri.reston.va.us>

On 20 September 2000, M.-A. Lemburg said:
> Would it be possible to write a Python syntax checker that doesn't
> stop processing at the first error it finds but instead tries
> to continue as far as possible (much like make -k) ?
> 
> If yes, could the existing Python parser/compiler be reused for
> such a tool ?

From what I understand of Python's parser and parser generator, no.
Recovering from errors is indeed highly non-trivial.  If you're really
interested, I'd look into Terence Parr's ANTLR -- it's a very fancy
parser generator that's waaay ahead of pgen (or lex/yacc, for that
matter).  ANTLR 2.x is highly Java-centric, and AFAIK doesn't yet have a
C backend (grumble) -- just C++ and Java.  (Oh wait, the antlr.org web
site says it can generate Sather too -- now there's an important
mainstream language!  ;-)

Tech notes: like pgen, ANTLR is LL; it generates a recursive-descent
parser.  Unlike pgen, ANTLR is LL(k) -- it can support arbitrary
lookahead, although k>2 can make parser generation expensive (not
parsing itself, just turning your grammar into code), as well as make
your language harder to understand.  (I have a theory that pgen's k=1
limitation has been a brick wall in the way of making Python's syntax
more complex, i.e. it's a *feature*!)

More importantly, ANTLR has good support for error recovery.  My BibTeX
parser has a lot of fun recovering from syntax errors, and (with a
little smoke 'n mirrors magic in the lexing stage) does a pretty good
job of it.  But you're right, it's *not* trivial to get this stuff
right.  And without support from the parser generator, I suspect you
would be in a world of hurtin'.

Disclaimer: I'm a programmer, not a computer scientist; it's been ages
since I read the Dragon Book, and I had to struggle with every paragraph
then; PCCTS 1.x (the precursor to ANTLR 2.x) is the only parser
generator I've used personally; and I've never written a parser for a
"real" language (although I can attest that BibTeX's lexical structure
was tricky enough!).

        Greg


From gward@mems-exchange.org  Mon Sep 25 15:43:10 2000
From: gward@mems-exchange.org (Greg Ward)
Date: Mon, 25 Sep 2000 10:43:10 -0400
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: <14799.24252.537090.326130@anthem.concentric.net>; from bwarsaw@beopen.com on Mon, Sep 25, 2000 at 10:18:36AM -0400
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de> <39CF596C.17BA4DC5@interet.com> <14799.24252.537090.326130@anthem.concentric.net>
Message-ID: <20000925104310.B1747@ludwig.cnri.reston.va.us>

On 25 September 2000, Barry A. Warsaw said:
> I've been wanting to check out Antlr (www.antlr.org) because it gives
> us the /possibility/ to use the same grammar files for both CPython
> and JPython.  One problem though is that it generates Java and C++ so
> we'd be accepting our first C++ into the core if we went this route.

Or contribute a C back-end to ANTLR -- I've been toying with this idea
for, ummm, too damn long now.  Years.

        Greg


From jeremy@beopen.com  Mon Sep 25 15:50:30 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Mon, 25 Sep 2000 10:50:30 -0400 (EDT)
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: <39CF596C.17BA4DC5@interet.com>
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de>
 <39CF596C.17BA4DC5@interet.com>
Message-ID: <14799.26166.965015.344977@bitdiddle.concentric.net>

>>>>> "JCA" == James C Ahlstrom <jim@interet.com> writes:

  JCA> The following is based on trying (a great learning experience)
  JCA> to write a better Python lint.

  JCA> There are IMHO two problems with the current Python grammar
  JCA> file.  It is not possible to express operator precedence, so
  JCA> deliberate shift/reduce conflicts are used instead.  That makes
  JCA> the parse tree complicated and non intuitive.  And there is no
  JCA> provision for error productions.  YACC has both of these as
  JCA> built-in features.

  JCA> I also found speed problems with tokenize.py.  AFAIK, it only
  JCA> exists because tokenizer.c does not provide comments as tokens,
  JCA> but eats them instead.  We could modify tokenizer.c, then make
  JCA> tokenize.py be the interface to the fast C tokenizer.  This
  JCA> eliminates the problem of updating both too.

  JCA> So how about re-writing the Python grammar in YACC in order to
  JCA> use its more advanced features??  The simple YACC grammar I
  JCA> wrote for 1.5.2 plus an altered tokenizer.c parsed the whole
  JCA> Lib/*.py in a couple seconds vs. 30 seconds for the first file
  JCA> using Aaron Watters' Python lint grammar written in Python.

Why not use the actual Python parser instead of tokenize.py?  I assume
it is also faster than Aaron's Python lint grammar written in Python.
The compiler in Tools/compiler uses the parser module internally and
produces an AST that is straightforward to use.  (The parse tree
produced by the parser module is fairly low-level.)

There was a thread (on the compiler-sig, I believe) where Moshe and I
noodled with a simple lint-like warnings framework based on the
compiler package.  I don't have the code we ended up with, but I found
an example checker in the mail archives and have included it below.
It checks for NameErrors.

I believe one useful change that Moshe and I arrived at was to avoid
the explicit stack that the code uses (via enterNamespace and
exitNamespace) and instead pass the namespace as an optional extra
argument to the visitXXX methods.
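The refactoring Jeremy describes can be sketched with toy classes (the `Node` and `Visitor` names below are illustrative, not the compiler package's actual API): each visit call receives the current namespace as an argument, so the explicit enter/exit stack disappears.

```python
class Node:
    def __init__(self, kind, name=None, children=()):
        self.kind, self.name, self.children = kind, name, children

class Visitor:
    # Instead of enterNamespace/exitNamespace pushing onto a stack,
    # every visit call is handed the current namespace, and a
    # scope-introducing node passes a fresh one to its children.
    def visit(self, node, namespace):
        method = getattr(self, 'visit_' + node.kind, self.generic)
        return method(node, namespace)

    def generic(self, node, namespace):
        for child in node.children:
            self.visit(child, namespace)

    def visit_function(self, node, namespace):
        namespace[node.name] = True      # the def binds in the outer scope
        inner = {}                       # fresh namespace for the body
        for child in node.children:
            self.visit(child, inner)

    def visit_name(self, node, namespace):
        namespace[node.name] = True

tree = Node('module', children=[
    Node('name', 'x'),
    Node('function', 'f', children=[Node('name', 'y')]),
])
top = {}
Visitor().visit(tree, top)
# 'x' and 'f' land in the module namespace; 'y' only in the function's
```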

Jeremy

"""Check for NameErrors"""

from compiler import parseFile, walk
from compiler.misc import Stack, Set

import __builtin__
from UserDict import UserDict

class Warning:
    def __init__(self, filename, funcname, lineno):
        self.filename = filename
        self.funcname = funcname
        self.lineno = lineno

    def __str__(self):
        return self._template % self.__dict__

class UndefinedLocal(Warning):
    super_init = Warning.__init__
    
    def __init__(self, filename, funcname, lineno, name):
        self.super_init(filename, funcname, lineno)
        self.name = name

    _template = "%(filename)s:%(lineno)s  " \
                "%(funcname)s undefined local %(name)s"

class NameError(UndefinedLocal):
    _template = "%(filename)s:%(lineno)s  " \
                "%(funcname)s undefined name %(name)s"

class NameSet(UserDict):
    """Track names and the line numbers where they are referenced"""
    def __init__(self):
        self.data = self.names = {}

    def add(self, name, lineno):
        l = self.names.get(name, [])
        l.append(lineno)
        self.names[name] = l

class CheckNames:
    def __init__(self, filename):
        self.filename = filename
        self.warnings = []
        self.scope = Stack()
        self.gUse = NameSet()
        self.gDef = NameSet()
        # _locals is the stack of local namespaces
        # locals is the top of the stack
        self._locals = Stack()
        self.lUse = None
        self.lDef = None
        self.lGlobals = None # var declared global
        # holds scope,def,use,global triples for later analysis
        self.todo = []

    def enterNamespace(self, node):
        self.scope.push(node)
        self.lUse = use = NameSet()
        self.lDef = _def = NameSet()
        self.lGlobals = gbl = NameSet()
        self._locals.push((use, _def, gbl))

    def exitNamespace(self):
        self.todo.append((self.scope.top(), self.lDef, self.lUse,
                          self.lGlobals))
        self.scope.pop()
        self._locals.pop()
        if self._locals:
            self.lUse, self.lDef, self.lGlobals = self._locals.top()
        else:
            self.lUse = self.lDef = self.lGlobals = None

    def warn(self, warning, funcname, lineno, *args):
        args = (self.filename, funcname, lineno) + args
        self.warnings.append(apply(warning, args))

    def defName(self, name, lineno, local=1):
        if self.lUse is None:
            self.gDef.add(name, lineno)
        elif local == 0:
            self.gDef.add(name, lineno)
            self.lGlobals.add(name, lineno)
        else:
            self.lDef.add(name, lineno)

    def useName(self, name, lineno, local=1):
        if self.lUse is None:
            self.gUse.add(name, lineno)
        elif local == 0:
            self.gUse.add(name, lineno)
            self.lUse.add(name, lineno)            
        else:
            self.lUse.add(name, lineno)

    def check(self):
        for s, d, u, g in self.todo:
            self._check(s, d, u, g, self.gDef)
        # XXX then check the globals

    def _check(self, scope, _def, use, gbl, globals):
        # check for NameError
        # a name is defined iff it is in def.keys()
        # a name is global iff it is in gdefs.keys()
        gdefs = UserDict()
        gdefs.update(globals)
        gdefs.update(__builtin__.__dict__)
        defs = UserDict()
        defs.update(gdefs)
        defs.update(_def)
        errors = Set()
        for name in use.keys():
            if not defs.has_key(name):
                firstuse = use[name][0]
                self.warn(NameError, scope.name, firstuse, name)
                errors.add(name)

        # check for UndefinedLocalNameError
        # order == use & def sorted by lineno
        # elements are lineno, flag, name
        # flag = 0 if use, flag = 1 if def
        order = []
        for name, lines in use.items():
            if gdefs.has_key(name) and not _def.has_key(name):
                # this is a global ref, we can skip it
                continue
            for lineno in lines:
                order.append((lineno, 0, name))
        for name, lines in _def.items():
            for lineno in lines:
                order.append((lineno, 1, name))
        order.sort()
        # ready contains names that have been defined or warned about
        ready = Set()
        for lineno, flag, name in order:
            if flag == 0: # use
                if not ready.has_elt(name) and not errors.has_elt(name):
                    self.warn(UndefinedLocal, scope.name, lineno, name)
                    ready.add(name) # don't warn again
            else:
                ready.add(name)

    # below are visitor methods

    def visitFunction(self, node, noname=0):
        for expr in node.defaults:
            self.visit(expr)
        if not noname:
            self.defName(node.name, node.lineno)
        self.enterNamespace(node)
        for name in node.argnames:
            self.defName(name, node.lineno)
        self.visit(node.code)
        self.exitNamespace()
        return 1

    def visitLambda(self, node):
        return self.visitFunction(node, noname=1)

    def visitClass(self, node):
        for expr in node.bases:
            self.visit(expr)
        self.defName(node.name, node.lineno)
        self.enterNamespace(node)
        self.visit(node.code)
        self.exitNamespace()
        return 1

    def visitName(self, node):
        self.useName(node.name, node.lineno)

    def visitGlobal(self, node):
        for name in node.names:
            self.defName(name, node.lineno, local=0)

    def visitImport(self, node):
        for name, alias in node.names:
            self.defName(alias or name, node.lineno)

    visitFrom = visitImport

    def visitAssName(self, node):
        self.defName(node.name, node.lineno)
    
def check(filename):
    global p, checker
    p = parseFile(filename)
    checker = CheckNames(filename)
    walk(p, checker)
    checker.check()
    for w in checker.warnings:
        print w

if __name__ == "__main__":
    import sys

    # XXX need to do real arg processing
    check(sys.argv[1])



From nascheme@enme.ucalgary.ca  Mon Sep 25 15:57:42 2000
From: nascheme@enme.ucalgary.ca (Neil Schemenauer)
Date: Mon, 25 Sep 2000 08:57:42 -0600
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: <20000925104009.A1747@ludwig.cnri.reston.va.us>; from Greg Ward on Mon, Sep 25, 2000 at 10:40:09AM -0400
References: <39C8C834.5E3B90E7@lemburg.com> <20000925104009.A1747@ludwig.cnri.reston.va.us>
Message-ID: <20000925085742.A26922@keymaster.enme.ucalgary.ca>

On Mon, Sep 25, 2000 at 10:40:09AM -0400, Greg Ward wrote:
> PCCTS 1.x (the precursor to ANTLR 2.x) is the only parser generator
> I've used personally

How different are PCCTS and ANTLR?  Perhaps we could use PCCTS for
CPython and ANTLR for JPython.

  Neil


From guido@beopen.com  Mon Sep 25 17:06:40 2000
From: guido@beopen.com (Guido van Rossum)
Date: Mon, 25 Sep 2000 11:06:40 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules posixmodule.c,2.173,2.174
In-Reply-To: Your message of "Mon, 25 Sep 2000 15:56:17 +0200."
 <20000925155616.H20757@xs4all.nl>
References: <200009251322.GAA21574@slayer.i.sourceforge.net>
 <20000925155616.H20757@xs4all.nl>
Message-ID: <200009251606.LAA19626@cj20424-a.reston1.va.home.com>

> > Modified Files:
> > 	posixmodule.c 
> > Log Message:
> > Add missing prototypes for the benefit of SunOS 4.1.4 */
> 
> These should go in pyport.h ! Unless you have some reason not to export them
> to other file, but in that case we need to take a good look at the whole
> pyport.h thing.
> 
> > + #if defined(sun) && !defined(__SVR4)
> > + /* SunOS 4.1.4 doesn't have prototypes for these: */
> > + extern int rename(const char *, const char *);
> > + extern int pclose(FILE *);
> > + extern int fclose(FILE *);
> > + #endif
> > + 

Maybe, but there's already tons of platform-specific junk in
posixmodule.c.  Given we're so close to the code freeze, let's not do
it right now.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From jim@interet.com  Mon Sep 25 16:05:56 2000
From: jim@interet.com (James C. Ahlstrom)
Date: Mon, 25 Sep 2000 11:05:56 -0400
Subject: [Python-Dev] Python syntax checker ?
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de>
 <39CF596C.17BA4DC5@interet.com> <14799.24252.537090.326130@anthem.concentric.net>
Message-ID: <39CF69D4.E3649C69@interet.com>

"Barry A. Warsaw" wrote:
> I've been wanting to check out Antlr (www.antlr.org) because it gives
> us the /possibility/ to use the same grammar files for both CPython
> and JPython.  One problem though is that it generates Java and C++ so
> we'd be accepting our first C++ into the core if we went this route.

Yes, but why not YACC?  Is Antlr so much better, or is
YACC too primitive, or what?  IMHO, adding C++ just for
parsing is not going to happen, so Antlr is not going to
happen either.

JimA


From gward@mems-exchange.org  Mon Sep 25 16:07:53 2000
From: gward@mems-exchange.org (Greg Ward)
Date: Mon, 25 Sep 2000 11:07:53 -0400
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: <20000925085742.A26922@keymaster.enme.ucalgary.ca>; from nascheme@enme.ucalgary.ca on Mon, Sep 25, 2000 at 08:57:42AM -0600
References: <39C8C834.5E3B90E7@lemburg.com> <20000925104009.A1747@ludwig.cnri.reston.va.us> <20000925085742.A26922@keymaster.enme.ucalgary.ca>
Message-ID: <20000925110752.A1891@ludwig.cnri.reston.va.us>

On 25 September 2000, Neil Schemenauer said:
> How different are PCCTS and ANTLR?  Perhaps we could use PCCTS for
> CPython and ANTLR for JPython.

I can't speak from experience; I've only looked briefly at ANTLR.  But
it looks like they are as different as two LL(k) parser generators
written by the same guy can be.  I.e., same general philosophy, but not
much in common apart from that.

Also, to be blunt, the C back-end of PCCTS 1.x has a lot of serious
problems.  It's heavily dependent on global variables, so goodbye to a
thread-safe lexer/parser.  It uses boatloads of tricky macros, which
makes debugging the lexer a bear.  It's well-nigh impossible to remember
which macros are defined in which .c files, which functions are defined
in which .h files, and so forth.  (No really! it's like that!)

I think it would be much healthier to take the sound OO thinking that
went into the original C++ back-end for PCCTS 1.x, and that evolved
further with the Java and C++ back-ends for ANTLR 2.x, and do the same
sort of stuff in C.  Writing good solid code in C isn't impossible, it's
just tricky.  And the code generated by PCCTS 1.x is *not* good solid C
code (IMHO).

        Greg


From cgw@fnal.gov  Mon Sep 25 16:12:35 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Mon, 25 Sep 2000 10:12:35 -0500 (CDT)
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: <20000925085742.A26922@keymaster.enme.ucalgary.ca>
References: <39C8C834.5E3B90E7@lemburg.com>
 <20000925104009.A1747@ludwig.cnri.reston.va.us>
 <20000925085742.A26922@keymaster.enme.ucalgary.ca>
Message-ID: <14799.27491.414160.577996@buffalo.fnal.gov>

I think the more that can be done in Python, rather than with external
code like Antlr, the better.  Who cares if it is slow?  I
could imagine a 2-pass approach where the internal Python parser is
used to construct a parse tree which is then checked for certain
errors.  I wrote something like this to check for mismatched numbers
of '%' values and arguments in string-formatting operations (see
http://home.fnal.gov/~cgw/python/check_pct.html if you are
interested).

Only sections of code which cannot be parsed by Python's internal
parser would then need to be checked by the "stage 2" checker, which
could afford to give up speed for accuracy.  This is the part I think
should be done in Python... for all the reasons we like Python:
flexibility, maintainability, etc.
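Charles's '%'-count check can be sketched with the modern `ast` module (a
re-creation for illustration; the original check_pct tool predates `ast`, and
this naive version ignores mapping forms like '%(name)s'):

```python
import ast

def count_specs(fmt):
    # naive: '%%' is a literal percent sign; every remaining '%'
    # starts one conversion specifier
    return fmt.replace('%%', '').count('%')

def check_percent(source):
    """Warn when a literal format string's specifier count doesn't
    match the length of a literal tuple on the right of the % operator."""
    warnings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.BinOp)
                and isinstance(node.op, ast.Mod)
                and isinstance(node.left, ast.Constant)
                and isinstance(node.left.value, str)
                and isinstance(node.right, ast.Tuple)):
            nspec = count_specs(node.left.value)
            nargs = len(node.right.elts)
            if nspec != nargs:
                warnings.append((node.lineno, nspec, nargs))
    return warnings

warnings = check_percent('x = "%s %s" % (1,)\ny = "%d" % (2,)\n')
```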




From bwarsaw@beopen.com  Mon Sep 25 16:23:40 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Mon, 25 Sep 2000 11:23:40 -0400 (EDT)
Subject: [Python-Dev] Python syntax checker ?
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de>
 <39CF596C.17BA4DC5@interet.com>
 <14799.24252.537090.326130@anthem.concentric.net>
 <20000925104310.B1747@ludwig.cnri.reston.va.us>
Message-ID: <14799.28156.687176.869540@anthem.concentric.net>

>>>>> "GW" == Greg Ward <gward@mems-exchange.org> writes:

    GW> Or contribute a C back-end to ANTLR -- I've been toying with
    GW> this idea for, ummm, too damn long now.  Years.

Yes (to both :).

>>>>> "NS" == Neil Schemenauer <nascheme@enme.ucalgary.ca> writes:

    NS> How different are PCCTS and ANTLR?  Perhaps we could use PCCTS
    NS> for CPython and ANTLR for JPython.

Unknown.  It would only make sense if the same grammar files could be
fed to each.  I have no idea whether that's true or not.  If not,
Greg's idea is worth researching.

-Barry


From loewis@informatik.hu-berlin.de  Mon Sep 25 16:36:24 2000
From: loewis@informatik.hu-berlin.de (Martin von Loewis)
Date: Mon, 25 Sep 2000 17:36:24 +0200 (MET DST)
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: <39CF69D4.E3649C69@interet.com> (jim@interet.com)
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de>
 <39CF596C.17BA4DC5@interet.com> <14799.24252.537090.326130@anthem.concentric.net> <39CF69D4.E3649C69@interet.com>
Message-ID: <200009251536.RAA26375@pandora.informatik.hu-berlin.de>

> Yes, but why not YACC?  Is Antlr so much better, or is
> YACC too primitive, or what?  IMHO, adding C++ just for
> parsing is not going to happen, so Antlr is not going to
> happen either.

I think the advantage that Barry saw is that ANTLR generates Java in
addition to C, so it could be used in JPython as well. In addition,
ANTLR is more advanced than YACC; it specifically supports full EBNF
as input, and has better mechanisms for conflict resolution.

On the YACC for Java side, Axel Schreiner has developed jay, see
http://www2.informatik.uni-osnabrueck.de/bernd/jay/staff/design/de/Artikel.htmld/
(if you read German, otherwise don't bother :-)

The main problem with multilanguage output is the semantic actions -
it would be quite a stunt to put semantic actions into the parser
which are valid both in C and Java :-) On that front, there is also
CUP (http://www.cs.princeton.edu/~appel/modern/java/CUP/), which has
different markup for Java actions ({: ... :}).

There is also BYACC/J, a patch to Berkeley Yacc to produce Java
(http://www.lincom-asg.com/~rjamison/byacc/).

Personally, I'm quite in favour of having the full parser source
(including parser generator if necessary) in the Python source
distribution. As a GCC contributor, I know what pain it is for users
that GCC requires bison to build - even though it is only required for
CVS builds, as distributions come with the generated files.

Regards,
Martin



From gward@mems-exchange.org  Mon Sep 25 17:22:35 2000
From: gward@mems-exchange.org (Greg Ward)
Date: Mon, 25 Sep 2000 12:22:35 -0400
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: <14799.28156.687176.869540@anthem.concentric.net>; from bwarsaw@beopen.com on Mon, Sep 25, 2000 at 11:23:40AM -0400
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de> <39CF596C.17BA4DC5@interet.com> <14799.24252.537090.326130@anthem.concentric.net> <20000925104310.B1747@ludwig.cnri.reston.va.us> <14799.28156.687176.869540@anthem.concentric.net>
Message-ID: <20000925122235.A2167@ludwig.cnri.reston.va.us>

On 25 September 2000, Barry A. Warsaw said:
>     NS> How different are PCCTS and ANTLR?  Perhaps we could use PCCTS
>     NS> for CPython and ANTLR for JPython.
> 
> Unknown.  It would only make sense if the same grammar files could be
> fed to each.  I have no idea whether that's true or not.  If not,
> Greg's idea is worth researching.

PCCTS 1.x grammar files tend to have lots of C code interwoven in them
-- at least for tricky, ill-defined grammars like BibTeX.  ;-)

ANTLR 2.x grammars certainly allow Java code to be woven into them; I
assume you can instead weave C++ or Sather if that's your preference.
Obviously, this would be one problem with having a common grammar for
JPython and CPython.

        Greg


From mal@lemburg.com  Mon Sep 25 17:39:22 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Mon, 25 Sep 2000 18:39:22 +0200
Subject: [Python-Dev] Python syntax checker ?
References: <39C8C834.5E3B90E7@lemburg.com> <20000925104009.A1747@ludwig.cnri.reston.va.us>
Message-ID: <39CF7FBA.A54C40D@lemburg.com>

Greg Ward wrote:
> 
> On 20 September 2000, M.-A. Lemburg said:
> > Would it be possible to write a Python syntax checker that doesn't
> > stop processing at the first error it finds but instead tries
> > to continue as far as possible (much like make -k) ?
> >
> > If yes, could the existing Python parser/compiler be reused for
> > such a tool ?
> 
> From what I understand of Python's parser and parser generator, no.
> Recovering from errors is indeed highly non-trivial.  If you're really
> interested, I'd look into Terence Parr's ANTLR -- it's a very fancy
> parser generator that's waaay ahead of pgen (or lex/yacc, for that
> matter).  ANTLR 2.x is highly Java-centric, and AFAIK doesn't yet have a
> C backend (grumble) -- just C++ and Java.  (Oh wait, the antlr.org web
> site says it can generate Sather too -- now there's an important
> mainstream language!  ;-)

Thanks, I'll have a look.
 
> Tech notes: like pgen, ANTLR is LL; it generates a recursive-descent
> parser.  Unlike pgen, ANTLR is LL(k) -- it can support arbitrary
> lookahead, although k>2 can make parser generation expensive (not
> parsing itself, just turning your grammar into code), as well as make
> your language harder to understand.  (I have a theory that pgen's k=1
> limitation has been a brick wall in the way of making Python's syntax
> more complex, i.e. it's a *feature*!)
> 
> More importantly, ANTLR has good support for error recovery.  My BibTeX
> parser has a lot of fun recovering from syntax errors, and (with a
> little smoke 'n mirrors magic in the lexing stage) does a pretty good
> job of it.  But you're right, it's *not* trivial to get this stuff
> right.  And without support from the parser generator, I suspect you
> would be in a world of hurtin'.

I was actually thinking of extracting the Python tokenizer and
parser from the Python source and tweaking it until it did
what I wanted it to do, ie. not generate valid code but produce
valid error messages ;-)

Now from the feedback I got it seems that this is not the
right approach. I'm not even sure whether using a parser
at all is the right way... I may have to stick to a fairly
general tokenizer and then try to solve the problem in chunks
of code (much like what Guido hinted at in his reply), possibly
even by doing trial and error using the Python builtin compiler
on these chunks.
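The trial-and-error idea can be sketched like this (a rough, assumption-laden
illustration: chunks are split at unindented lines, so top-level continuations
like `else:` would produce false positives):

```python
import re

def check_chunks(source, filename='<string>'):
    """Compile each top-level chunk separately and collect every
    SyntaxError instead of stopping at the first, make -k style."""
    errors = []
    # naive chunking: a new chunk starts at each unindented line
    chunks = re.split(r'\n(?=\S)', source)
    pos = 0
    for chunk in chunks:
        lineno = source.count('\n', 0, pos) + 1
        try:
            compile(chunk, filename, 'exec')
        except SyntaxError as e:
            errors.append((lineno + (e.lineno or 1) - 1, e.msg))
        pos += len(chunk) + 1   # account for the newline re.split consumed
    return errors

errs = check_chunks("x = 1\ny = (\nz = 2\n")
# the unclosed paren on line 2 is reported; lines 1 and 3 still compile
```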

Oh well,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From fdrake@beopen.com  Mon Sep 25 18:04:18 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Mon, 25 Sep 2000 13:04:18 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules getpath.c,1.30,1.31
In-Reply-To: <200009251700.KAA27700@slayer.i.sourceforge.net>
References: <200009251700.KAA27700@slayer.i.sourceforge.net>
Message-ID: <14799.34194.855026.395907@cj42289-a.reston1.va.home.com>

Jeremy Hylton writes:
 > fix bug #114290: when interpreter's argv[0] has a relative path make
 >     it absolute by joining it with getcwd result.  avoid including
 >     unnecessary ./ in path but do not test for ../ (more complicated)
...
 > +     else if (argv0_path[0] == '.') {
 > + 	getcwd(path, MAXPATHLEN);
 > + 	if (argv0_path[1] == '/') 
 > + 	    joinpath(path, argv0_path + 2);

  Did you test this when argv[0] is something like './/foo/bin/python'?


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From effbot@telia.com  Mon Sep 25 18:18:21 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Mon, 25 Sep 2000 19:18:21 +0200
Subject: [Python-Dev] last second patches (was: regarding the Python Developer posting...)
References: <LNBBLJKPBEHFEDALKOLCMEKAHHAA.tim_one@email.msn.com> <39CE71FD.8858B71D@lemburg.com>
Message-ID: <016e01c02714$f945bc20$766940d5@hagrid>

in response to an OS X compiler problem, mal wrote:
> You could try to enable the macro at the top of unicodectype.c:
>  
> #if defined(macintosh) || defined(MS_WIN64)
> /*XXX This was required to avoid a compiler error for an early Win64
>  * cross-compiler that was used for the port to Win64. When the platform is
>  * released the MS_WIN64 inclusion here should no longer be necessary.
>  */
> /* This probably needs to be defined for some other compilers too. It breaks the
> ** 5000-label switch statement up into switches with around 1000 cases each.
> */
> #define BREAK_SWITCH_UP return 1; } switch (ch) {
> #else
> #define BREAK_SWITCH_UP /* nothing */
> #endif
> 
> If it does compile with the work-around enabled, please
> give us a set of defines which identify the compiler and
> platform so we can enable it per default for your setup.

I have a 500k "negative patch" sitting on my machine which removes
most of unicodectype.c, replacing it with a small data table (based on
the same unidb work as yesterday's unicodedatabase patch).

out
</F>

# dump all known unicode data

import unicodedata

for i in range(65536):
    char = unichr(i)
    data = (
        # ctype predicates
        char.isalnum(),
        char.isalpha(),
        char.isdecimal(),
        char.isdigit(),
        char.islower(),
        char.isnumeric(),
        char.isspace(),
        char.istitle(),
        char.isupper(),
        # ctype mappings
        char.lower(),
        char.upper(),
        char.title(),
        # properties
        unicodedata.digit(char, None),
        unicodedata.numeric(char, None),
        unicodedata.decimal(char, None),
        unicodedata.category(char),
        unicodedata.bidirectional(char),
        unicodedata.decomposition(char),
        unicodedata.mirrored(char),
        unicodedata.combining(char)
        )




From effbot@telia.com  Mon Sep 25 18:27:19 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Mon, 25 Sep 2000 19:27:19 +0200
Subject: [Python-Dev] last second patches (was: regarding the Python Developer posting...)
References: <LNBBLJKPBEHFEDALKOLCMEKAHHAA.tim_one@email.msn.com> <39CE71FD.8858B71D@lemburg.com> <016e01c02714$f945bc20$766940d5@hagrid>
Message-ID: <017801c02715$ebcc38c0$766940d5@hagrid>

oops.  mailer problem; here's the rest of the mail:

> I have a 500k "negative patch" sitting on my machine which removes
> most of unicodectype.c, replacing it with a small data table (based on
> the same unidb work as yesterday's unicodedatabase patch).

(this shaves another 400-500k off the source distribution,
and 10-20k in the binaries...)

I've verified that all ctype-related methods return the same result
as before the patch, for all characters in the unicode set (see the
attached script).

should I check it in?

</F>



From mal@lemburg.com  Mon Sep 25 18:46:21 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Mon, 25 Sep 2000 19:46:21 +0200
Subject: [Python-Dev] last second patches (was: regarding the Python
 Developer posting...)
References: <LNBBLJKPBEHFEDALKOLCMEKAHHAA.tim_one@email.msn.com> <39CE71FD.8858B71D@lemburg.com> <016e01c02714$f945bc20$766940d5@hagrid> <017801c02715$ebcc38c0$766940d5@hagrid>
Message-ID: <39CF8F6D.3F32C8FD@lemburg.com>

Fredrik Lundh wrote:
> 
> oops.  mailer problem; here's the rest of the mail:
> 
> > I have a 500k "negative patch" sitting on my machine which removes
> > most of unicodectype.c, replacing it with a small data table (based on
> > the same unidb work as yesterday's unicodedatabase patch).
> 
> (this shaves another 400-500k off the source distribution,
> and 10-20k in the binaries...)
> 
> I've verified that all ctype-related methods return the same result
> as before the patch, for all characters in the unicode set (see the
> attached script).
> 
> should I check it in?

Any chance of taking a look at it first ? (BTW, what happened to the
usual post to SF, review, then checkin cycle ?)

The C type checks are a little performance sensitive since they
are used on a char by char basis in the C implementation of
.upper(), etc. -- do the new methods give the same performance ?

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From tim_one@email.msn.com  Mon Sep 25 18:55:49 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Mon, 25 Sep 2000 13:55:49 -0400
Subject: [Python-Dev] last second patches (was: regarding the Python  Developer posting...)
In-Reply-To: <39CF8F6D.3F32C8FD@lemburg.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEOKHHAA.tim_one@email.msn.com>

[M.-A. Lemburg, on /F's Unicode patches]
> Any chance of taking a look at it first ? (BTW, what happened to the
> usual post to SF, review, then checkin cycle ?)

I encouraged /F *not* to submit a patch for the unicodedatabase.c change.
He knows what he's doing, experts in an area are allowed (see PEP200) to
skip the patch business, and we're trying to make quick progress before
2.0b2 ships.

This change may be more controversial, though:

> The C type checks are a little performance sensitive since they
> are used on a char by char basis in the C implementation of
> .upper(), etc. -- do the new methods give the same performance ?

Don't know.  Although it's hard to imagine we have any Unicode apps out
there now that will notice one way or the other <wink>.




From effbot@telia.com  Mon Sep 25 19:08:22 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Mon, 25 Sep 2000 20:08:22 +0200
Subject: [Python-Dev] last second patches (was: regarding the Python  Developer posting...)
References: <LNBBLJKPBEHFEDALKOLCMEKAHHAA.tim_one@email.msn.com> <39CE71FD.8858B71D@lemburg.com> <016e01c02714$f945bc20$766940d5@hagrid> <017801c02715$ebcc38c0$766940d5@hagrid> <39CF8F6D.3F32C8FD@lemburg.com>
Message-ID: <003601c0271c$1b814c80$766940d5@hagrid>

mal wrote:
> Any chance of taking a look at it first ?

same as unicodedatabase.c, just other data.

> (BTW, what happened to the usual post to SF, review, then
> checkin cycle ?)

two problems: SF cannot handle patches larger than 500k.
and we're in ship mode...

> The C type checks are a little performance sensitive since they
> are used on a char by char basis in the C implementation of
> .upper(), etc. -- do the new methods give the same performance ?

well, they're about 40% faster on my box.  ymmv, of course.

</F>



From gward@mems-exchange.org  Mon Sep 25 19:05:12 2000
From: gward@mems-exchange.org (Greg Ward)
Date: Mon, 25 Sep 2000 14:05:12 -0400
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: <200009251536.RAA26375@pandora.informatik.hu-berlin.de>; from loewis@informatik.hu-berlin.de on Mon, Sep 25, 2000 at 05:36:24PM +0200
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de> <39CF596C.17BA4DC5@interet.com> <14799.24252.537090.326130@anthem.concentric.net> <39CF69D4.E3649C69@interet.com> <200009251536.RAA26375@pandora.informatik.hu-berlin.de>
Message-ID: <20000925140511.A2319@ludwig.cnri.reston.va.us>

On 25 September 2000, Martin von Loewis said:
> Personally, I'm quite in favour of having the full parser source
> (including parser generator if necessary) in the Python source
> distribution. As a GCC contributor, I know what pain it is for users
> that GCC requires bison to build - even though it is only required for
> CVS builds, as distributions come with the generated files.

This would be a strike against ANTLR, since it's written in Java -- and
therefore is about as portable as a church.  ;-(

It should be possible to generate good, solid, portable C code... but
AFAIK no one has done so to date with ANTLR 2.x.

        Greg


From jeremy@beopen.com  Mon Sep 25 19:11:12 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Mon, 25 Sep 2000 14:11:12 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules getpath.c,1.30,1.31
In-Reply-To: <14799.34194.855026.395907@cj42289-a.reston1.va.home.com>
References: <200009251700.KAA27700@slayer.i.sourceforge.net>
 <14799.34194.855026.395907@cj42289-a.reston1.va.home.com>
Message-ID: <14799.38208.987507.250305@bitdiddle.concentric.net>

>>>>> "FLD" == Fred L Drake, <fdrake@beopen.com> writes:

  FLD> Did you test this when argv[0] is something like
  FLD> './/foo/bin/python'? 

No.  Two questions: What would that mean? How could I generate it?

Jeremy




From fdrake@beopen.com  Mon Sep 25 19:07:00 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Mon, 25 Sep 2000 14:07:00 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules getpath.c,1.30,1.31
In-Reply-To: <14799.38208.987507.250305@bitdiddle.concentric.net>
References: <200009251700.KAA27700@slayer.i.sourceforge.net>
 <14799.34194.855026.395907@cj42289-a.reston1.va.home.com>
 <14799.38208.987507.250305@bitdiddle.concentric.net>
Message-ID: <14799.37956.408416.190160@cj42289-a.reston1.va.home.com>

Jeremy Hylton writes:
 >   FLD> Did you test this when argv[0] is something like
 >   FLD> './/foo/bin/python'? 
 > 
 > No.  Two questions: What would that mean? How could I generate it?

  That should mean the same as './foo/bin/python' since multiple '/'
are equivalent to a single '/' on Unix.  (Same for r'\' on Windows
since this won't interfere with UNC paths (like '\\host\foo\bin...')).
  You can do this using fork/exec.
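The fork/exec trick can be sketched with `subprocess` (a modern convenience
over raw `os.fork`/`os.execv`; the `.//foo/sh` name is made up for the demo):
the real binary goes in `executable=`, while args[0] is whatever string you
want the child to see as its argv[0] -- here observed via the shell's `$0`.

```python
import subprocess

# The kernel runs /bin/sh, but the child process receives the
# doubled-slash string './/foo/sh' as argv[0], which a POSIX shell
# exposes as $0 when no command_name operand follows -c.
result = subprocess.run(['.//foo/sh', '-c', 'echo $0'],
                        executable='/bin/sh',
                        capture_output=True, text=True)
print(result.stdout.strip())
```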


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From jeremy@beopen.com  Mon Sep 25 19:20:20 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Mon, 25 Sep 2000 14:20:20 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules getpath.c,1.30,1.31
In-Reply-To: <14799.37956.408416.190160@cj42289-a.reston1.va.home.com>
References: <200009251700.KAA27700@slayer.i.sourceforge.net>
 <14799.34194.855026.395907@cj42289-a.reston1.va.home.com>
 <14799.38208.987507.250305@bitdiddle.concentric.net>
 <14799.37956.408416.190160@cj42289-a.reston1.va.home.com>
Message-ID: <14799.38756.174565.664691@bitdiddle.concentric.net>

>>>>> "FLD" == Fred L Drake, <fdrake@beopen.com> writes:

  FLD> Jeremy Hylton writes: Did you test this when argv[0] is
  FLD> something like './/foo/bin/python'?
  >>
  >> No.  Two questions: What would that mean? How could I generate
  >> it?

  FLD>   That should mean the same as './foo/bin/python' since
  FLD>   multiple '/' are equivalent to a single '/' on Unix.

Ok.  Tested with os.execv and it works correctly.

Did you see my query (in private email) about 1) whether it works on
Windows and 2) whether I should worry about platforms that don't have
a valid getcwd?

Jeremy




From effbot@telia.com  Mon Sep 25 19:26:16 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Mon, 25 Sep 2000 20:26:16 +0200
Subject: [Python-Dev] CVS problems
References: <200009251700.KAA27700@slayer.i.sourceforge.net><14799.34194.855026.395907@cj42289-a.reston1.va.home.com><14799.38208.987507.250305@bitdiddle.concentric.net> <14799.37956.408416.190160@cj42289-a.reston1.va.home.com>
Message-ID: <006c01c0271e$1a72b0c0$766940d5@hagrid>

> cvs add Objects\unicodetype_db.h
cvs server: scheduling file `Objects/unicodetype_db.h' for addition
cvs server: use 'cvs commit' to add this file permanently

> cvs commit Objects\unicodetype_db.h
cvs server: [11:05:10] waiting for anoncvs_python's lock in /cvsroot/python/python/dist/src/Objects

yet another stale lock?  if so, what happened?  and more
importantly, how do I get rid of it?

</F>



From thomas@xs4all.net  Mon Sep 25 19:23:22 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Mon, 25 Sep 2000 20:23:22 +0200
Subject: [Python-Dev] CVS problems
In-Reply-To: <006c01c0271e$1a72b0c0$766940d5@hagrid>; from effbot@telia.com on Mon, Sep 25, 2000 at 08:26:16PM +0200
References: <200009251700.KAA27700@slayer.i.sourceforge.net><14799.34194.855026.395907@cj42289-a.reston1.va.home.com><14799.38208.987507.250305@bitdiddle.concentric.net> <14799.37956.408416.190160@cj42289-a.reston1.va.home.com> <006c01c0271e$1a72b0c0$766940d5@hagrid>
Message-ID: <20000925202322.I20757@xs4all.nl>

On Mon, Sep 25, 2000 at 08:26:16PM +0200, Fredrik Lundh wrote:
> > cvs add Objects\unicodetype_db.h
> cvs server: scheduling file `Objects/unicodetype_db.h' for addition
> cvs server: use 'cvs commit' to add this file permanently
> 
> > cvs commit Objects\unicodetype_db.h
> cvs server: [11:05:10] waiting for anoncvs_python's lock in /cvsroot/python/python/dist/src/Objects
> 
> yet another stale lock?  if so, what happened?  and more
> importantly, how do I get rid of it?

This might not be a stale lock. Because it's anoncvs's lock, it can't be a
write lock. I've seen this before (mostly on checking out) and it does take
quite a bit for the CVS process to continue :P But in my cases, eventually
it did. If it stays longer than, say, 30m, it's probably
SF-bug-reporting-time again :P

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From tim_one@email.msn.com  Mon Sep 25 19:24:25 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Mon, 25 Sep 2000 14:24:25 -0400
Subject: [Python-Dev] CVS problems
In-Reply-To: <006c01c0271e$1a72b0c0$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEOMHHAA.tim_one@email.msn.com>

[Fredrik Lundh]
> > cvs add Objects\unicodetype_db.h
> cvs server: scheduling file `Objects/unicodetype_db.h' for addition
> cvs server: use 'cvs commit' to add this file permanently
>
> > cvs commit Objects\unicodetype_db.h
> cvs server: [11:05:10] waiting for anoncvs_python's lock in
> /cvsroot/python/python/dist/src/Objects
>
> yet another stale lock?  if so, what happened?  and more
> importantly, how do I get rid of it?

I expect this one goes away by itself -- anoncvs can't be doing a commit,
and I don't believe we've ever seen a stale lock from anoncvs.  Probably
just some fan doing their first read-only checkout over a slow line.  BTW, I
just did a full update & didn't get any lock msgs.  Try again!




From effbot@telia.com  Mon Sep 25 20:04:26 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Mon, 25 Sep 2000 21:04:26 +0200
Subject: [Python-Dev] CVS problems
References: <LNBBLJKPBEHFEDALKOLCMEOMHHAA.tim_one@email.msn.com>
Message-ID: <00bc01c02723$6f8faf40$766940d5@hagrid>

tim wrote:
> > > cvs commit Objects\unicodetype_db.h
> > cvs server: [11:05:10] waiting for anoncvs_python's lock in
> > /cvsroot/python/python/dist/src/Objects
> >
> I expect this one goes away by itself -- anoncvs can't be doing a commit,
> and I don't believe we've ever seen a stale lock from anoncvs.  Probably
> just some fan doing their first read-only checkout over a slow line.

I can update alright, but I still get this message when I try
to commit stuff.  this message, or timeouts from the server.

annoying...

</F>



From guido@beopen.com  Mon Sep 25 21:21:11 2000
From: guido@beopen.com (Guido van Rossum)
Date: Mon, 25 Sep 2000 15:21:11 -0500
Subject: [Python-Dev] last second patches (was: regarding the Python Developer posting...)
In-Reply-To: Your message of "Mon, 25 Sep 2000 20:08:22 +0200."
 <003601c0271c$1b814c80$766940d5@hagrid>
References: <LNBBLJKPBEHFEDALKOLCMEKAHHAA.tim_one@email.msn.com> <39CE71FD.8858B71D@lemburg.com> <016e01c02714$f945bc20$766940d5@hagrid> <017801c02715$ebcc38c0$766940d5@hagrid> <39CF8F6D.3F32C8FD@lemburg.com>
 <003601c0271c$1b814c80$766940d5@hagrid>
Message-ID: <200009252021.PAA20146@cj20424-a.reston1.va.home.com>

> mal wrote:
> > Any chance of taking a look at it first ?
> 
> same as unicodedatabase.c, just other data.
> 
> > (BTW, what happened to the usual post to SF, review, then
> > checkin cycle ?)
> 
> two problems: SF cannot handle patches larger than 500k.
> and we're in ship mode...
> 
> > The C type checks are a little performance sensitive since they
> > are used on a char by char basis in the C implementation of
> > .upper(), etc. -- do the new methods give the same performance ?
> 
> well, they're about 40% faster on my box.  ymmv, of course.

Fredrik, why don't you make your patch available for review by
Marc-Andre -- after all he "owns" this code (is the original author).
If Marc-Andre agrees, and Jeremy has enough time to finish the release
on time, I have no problem with checking it in.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From jeremy@beopen.com  Mon Sep 25 21:02:25 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Mon, 25 Sep 2000 16:02:25 -0400 (EDT)
Subject: [Python-Dev] CVS problems
In-Reply-To: <00bc01c02723$6f8faf40$766940d5@hagrid>
References: <LNBBLJKPBEHFEDALKOLCMEOMHHAA.tim_one@email.msn.com>
 <00bc01c02723$6f8faf40$766940d5@hagrid>
Message-ID: <14799.44881.753935.662313@bitdiddle.concentric.net>

>>>>> "FL" == Fredrik Lundh <effbot@telia.com> writes:

  FL>> cvs commit Objects\unicodetype_db.h
  >> > cvs server: [11:05:10] waiting for anoncvs_python's lock in
  >> > /cvsroot/python/python/dist/src/Objects
  >> >
  [tim wrote:]
  >> I expect this one goes away by itself -- anoncvs can't be doing a
  >> commit, and I don't believe we've ever seen a stale lock from
  >> anoncvs.  Probably just some fan doing their first read-only
  >> checkout over a slow line.

  FL> I can update alright, but I still get this message when I try to
  FL> commit stuff.  this message, or timeouts from the server.

  FL> annoying...

It's still there now, about an hour later.  I can't even tag the tree
with the r20b2 marker, of course.

How do we submit an SF admin request?

Jeremy


From effbot@telia.com  Mon Sep 25 21:31:06 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Mon, 25 Sep 2000 22:31:06 +0200
Subject: [Python-Dev] CVS problems
References: <LNBBLJKPBEHFEDALKOLCMEOMHHAA.tim_one@email.msn.com><00bc01c02723$6f8faf40$766940d5@hagrid> <14799.44881.753935.662313@bitdiddle.concentric.net>
Message-ID: <006901c0272f$ce106120$766940d5@hagrid>

jeremy wrote:

> It's still there now, about an hour later.  I can't even tag the tree
> with the r20b2 marker, of course.
> 
> How do we submit an SF admin request?

I've already submitted a support request.  not that anyone
seems to be reading them, though -- the oldest unassigned
request is from September 19th...

anyone knows anyone at sourceforge?

</F>



From effbot@telia.com  Mon Sep 25 21:49:47 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Mon, 25 Sep 2000 22:49:47 +0200
Subject: [Python-Dev] last second patches (was: regarding the Python Developer posting...)
References: <LNBBLJKPBEHFEDALKOLCMEKAHHAA.tim_one@email.msn.com> <39CE71FD.8858B71D@lemburg.com> <016e01c02714$f945bc20$766940d5@hagrid> <017801c02715$ebcc38c0$766940d5@hagrid> <39CF8F6D.3F32C8FD@lemburg.com>              <003601c0271c$1b814c80$766940d5@hagrid>  <200009252021.PAA20146@cj20424-a.reston1.va.home.com>
Message-ID: <008101c02732$29fbf4c0$766940d5@hagrid>

> Fredrik, why don't you make your patch available for review by
> Marc-Andre -- after all he "owns" this code (is the original author).

hey, *I* wrote the original string type, didn't I? ;-)

anyway, the new unicodectype.c file is here:
http://sourceforge.net/patch/download.php?id=101652

(the patch is 500k, the new file 14k)

the new data file is here:
http://sourceforge.net/patch/download.php?id=101653

the new generator script is already in the repository
(Tools/unicode/makeunicodedata.py)

</F>



From fdrake@beopen.com  Mon Sep 25 21:39:35 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Mon, 25 Sep 2000 16:39:35 -0400 (EDT)
Subject: [Python-Dev] CVS problems
In-Reply-To: <006901c0272f$ce106120$766940d5@hagrid>
References: <LNBBLJKPBEHFEDALKOLCMEOMHHAA.tim_one@email.msn.com>
 <00bc01c02723$6f8faf40$766940d5@hagrid>
 <14799.44881.753935.662313@bitdiddle.concentric.net>
 <006901c0272f$ce106120$766940d5@hagrid>
Message-ID: <14799.47111.674769.204798@cj42289-a.reston1.va.home.com>

Fredrik Lundh writes:
 > anyone knows anyone at sourceforge?

  I'll send an email.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From jim@interet.com  Mon Sep 25 21:48:28 2000
From: jim@interet.com (James C. Ahlstrom)
Date: Mon, 25 Sep 2000 16:48:28 -0400
Subject: [Python-Dev] Python syntax checker ?
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de>
 <39CF596C.17BA4DC5@interet.com> <14799.24252.537090.326130@anthem.concentric.net> <39CF69D4.E3649C69@interet.com> <200009251536.RAA26375@pandora.informatik.hu-berlin.de>
Message-ID: <39CFBA1C.3E05B760@interet.com>

Martin von Loewis wrote:
> 
>> Yes, but why not YACC?  Is Antlr so much better, or is

> I think the advantage that Barry saw is that ANTLR generates Java in
> addition to C, so it could be used in JPython as well. In addition,
> ANTLR is more advanced than YACC; it specifically supports full EBNF
> as input, and has better mechanisms for conflict resolution.

Oh, OK.  Thanks.
 
> Personally, I'm quite in favour of having the full parser source
> (including parser generator if necessary) in the Python source
> distribution. As a GCC contributor, I know what pain it is for users
> that GCC requires bison to build - even though it is only required for
> CVS builds, as distributions come with the generated files.

I see your point, but the practical solution that we can
do today is to use YACC, bison, and distribute the generated
parser files.

Jim


From jeremy@beopen.com  Mon Sep 25 22:14:02 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Mon, 25 Sep 2000 17:14:02 -0400 (EDT)
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: <39CFBA1C.3E05B760@interet.com>
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de>
 <39CF596C.17BA4DC5@interet.com>
 <14799.24252.537090.326130@anthem.concentric.net>
 <39CF69D4.E3649C69@interet.com>
 <200009251536.RAA26375@pandora.informatik.hu-berlin.de>
 <39CFBA1C.3E05B760@interet.com>
Message-ID: <14799.49178.2354.77727@bitdiddle.concentric.net>

>>>>> "JCA" == James C Ahlstrom <jim@interet.com> writes:

  >> Personally, I'm quite in favour of having the full parser source
  >> (including parser generator if necessary) in the Python source
  >> distribution. As a GCC contributor, I know what pain it is for
  >> users that GCC requires bison to build - even though it is only
  >> required for CVS builds, as distributions come with the generated
  >> files.

  JCA> I see your point, but the practical solution that we can do
  JCA> today is to use YACC, bison, and distribute the generated
  JCA> parser files.

I don't understand what problem this is a practical solution to.
This thread started with MAL's questions about finding errors in
Python code.  You mentioned an effort to write a lint-like tool.
It may be that YACC has great support for error recovery, in which
case MAL might want to look at it for his tool.

But in general, the most practical solution for parsing Python is
probably to use the Python parser and the builtin parser module.  It
already exists and seems to work just fine.
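For the syntax-checking case, that suggestion can be sketched with `compile()`, which drives the same builtin parser (in the Python 2 of this thread, `parser.suite()` gave an equivalent check); the helper name here is illustrative, not anything from the thread:

```python
# Minimal sketch: compile() invokes the builtin parser and raises
# SyntaxError at the first problem it finds.
def first_syntax_error(source):
    try:
        compile(source, "<string>", "exec")
    except SyntaxError as err:
        return err.lineno
    return None
```

So clean source yields None, while a bad second line is reported as line 2.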

Jeremy


From thomas@xs4all.net  Mon Sep 25 22:27:01 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Mon, 25 Sep 2000 23:27:01 +0200
Subject: [Python-Dev] CVS problems
In-Reply-To: <006901c0272f$ce106120$766940d5@hagrid>; from effbot@telia.com on Mon, Sep 25, 2000 at 10:31:06PM +0200
References: <LNBBLJKPBEHFEDALKOLCMEOMHHAA.tim_one@email.msn.com><00bc01c02723$6f8faf40$766940d5@hagrid> <14799.44881.753935.662313@bitdiddle.concentric.net> <006901c0272f$ce106120$766940d5@hagrid>
Message-ID: <20000925232701.J20757@xs4all.nl>

On Mon, Sep 25, 2000 at 10:31:06PM +0200, Fredrik Lundh wrote:
> jeremy wrote:

> > It's still there now, about an hour later.  I can't even tag the tree
> > with the r20b2 marker, of course.
> > 
> > How do we submit an SF admin request?
> 
> I've already submitted a support request.  not that anyone
> seems to be reading them, though -- the oldest unassigned
> request is from September 19th...

> anyone knows anyone at sourceforge?

I've had good results mailing 'staff@sourceforge.net' -- but only in real
emergencies (one of the servers was down, at the time.) That isn't to say
you or someone else shouldn't use it now (it's delaying the beta, after all,
which is kind of an emergency) but I just can't say how fast they'll respond
to such a request :)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From tim_one@email.msn.com  Mon Sep 25 22:33:27 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Mon, 25 Sep 2000 17:33:27 -0400
Subject: [Python-Dev] CVS problems
In-Reply-To: <20000925232701.J20757@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEPPHHAA.tim_one@email.msn.com>

The CVS problem has been fixed.




From mal@lemburg.com  Mon Sep 25 23:35:34 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Tue, 26 Sep 2000 00:35:34 +0200
Subject: [Python-Dev] last second patches (was: regarding the Python
 Developer posting...)
References: <LNBBLJKPBEHFEDALKOLCMEKAHHAA.tim_one@email.msn.com> <39CE71FD.8858B71D@lemburg.com> <016e01c02714$f945bc20$766940d5@hagrid> <017801c02715$ebcc38c0$766940d5@hagrid> <39CF8F6D.3F32C8FD@lemburg.com> <003601c0271c$1b814c80$766940d5@hagrid>
Message-ID: <39CFD336.C5B6DB4D@lemburg.com>

Fredrik Lundh wrote:
> 
> mal wrote:
> 
> > The C type checks are a little performance sensitive since they
> > are used on a char by char basis in the C implementation of
> > .upper(), etc. -- do the new methods give the same performance ?
> 
> well, they're about 40% faster on my box.  ymmv, of course.

Hmm, I get a 1% performance downgrade on Linux using pgcc, but
in the end it's a win anyway :-)

What remains are the nits I posted to SF.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From guido@beopen.com  Tue Sep 26 02:44:58 2000
From: guido@beopen.com (Guido van Rossum)
Date: Mon, 25 Sep 2000 20:44:58 -0500
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: Your message of "Mon, 25 Sep 2000 17:14:02 -0400."
 <14799.49178.2354.77727@bitdiddle.concentric.net>
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de> <39CF596C.17BA4DC5@interet.com> <14799.24252.537090.326130@anthem.concentric.net> <39CF69D4.E3649C69@interet.com> <200009251536.RAA26375@pandora.informatik.hu-berlin.de> <39CFBA1C.3E05B760@interet.com>
 <14799.49178.2354.77727@bitdiddle.concentric.net>
Message-ID: <200009260144.UAA25752@cj20424-a.reston1.va.home.com>

> I don't understand what problem this is a practical solution to.
> This thread started with MAL's questions about finding errors in
> Python code.  You mentioned an effort to write a lint-like tool.
> It may be that YACC has great support for error recovery, in which
> case MAL might want to look at it for his tool.
> 
> But in general, the most practical solution for parsing Python is
> probably to use the Python parser and the builtin parser module.  It
> already exists and seems to work just fine.

Probably not that relevant any more, but MAL originally asked for a
parser that doesn't stop at the first error.  That's a real weakness
of the existing parser!!!
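One crude way around that weakness, sketched purely for illustration (blank the offending line and reparse; nothing like YACC's real error recovery, and the helper name is hypothetical):

```python
def all_syntax_errors(source):
    """Collect the line numbers of several syntax errors, not just the first."""
    lines = source.splitlines()
    errors = []
    for _ in range(len(lines) + 1):      # bounded: at most one pass per line
        try:
            compile("\n".join(lines) + "\n", "<string>", "exec")
            break
        except SyntaxError as err:
            if err.lineno is None or err.lineno > len(lines) or err.lineno in errors:
                break                    # no progress; give up
            errors.append(err.lineno)
            lines[err.lineno - 1] = ""   # blank the bad line and reparse
    return errors
```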

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From greg@cosc.canterbury.ac.nz  Tue Sep 26 02:13:19 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Tue, 26 Sep 2000 13:13:19 +1200 (NZST)
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: <200009260144.UAA25752@cj20424-a.reston1.va.home.com>
Message-ID: <200009260113.NAA23556@s454.cosc.canterbury.ac.nz>

Guido:

> MAL originally asked for a
> parser that doesn't stop at the first error.  That's a real weakness
> of the existing parser!!!

Is it really worth putting a lot of effort into this?
In my experience, the vast majority of errors I get from
Python are run-time errors, not parse errors.

(If you could find multiple run-time errors in one go,
*that* would be an impressive trick!)

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From mwh21@cam.ac.uk  Tue Sep 26 13:15:26 2000
From: mwh21@cam.ac.uk (Michael Hudson)
Date: Tue, 26 Sep 2000 13:15:26 +0100 (BST)
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: <200009260113.NAA23556@s454.cosc.canterbury.ac.nz>
Message-ID: <Pine.SOL.4.21.0009261309240.22922-100000@yellow.csi.cam.ac.uk>

On Tue, 26 Sep 2000, Greg Ewing wrote:

> Guido:
> 
> > MAL originally asked for a
> > parser that doesn't stop at the first error.  That's a real weakness
> > of the existing parser!!!
> 
> Is it really worth putting a lot of effort into this?

It might be if you were trying to develop an IDE that could syntactically
analyse what the user was typing even if he/she had left a half finished
expression further up in the buffer (I'd kind of assumed this was the
goal).  So you're not continuing after errors, exactly, more like
unfinishednesses (or some better word...).

I guess one approach to this would be to divide up the buffer according
to indentation and then parse each block as delimited by the indentation
individually.
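That idea sketches out roughly like this (naive: it splits only at zero-indent lines and would be defeated by triple-quoted strings; both function names are made up for the sketch):

```python
def split_top_level_blocks(source):
    """Split a buffer at zero-indent lines, yielding one chunk per block."""
    blocks, current = [], []
    for line in source.splitlines():
        if line and line[0] not in " \t" and current:
            blocks.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        blocks.append("\n".join(current))
    return blocks

def check_blocks(source):
    """Parse each block on its own; one broken block no longer hides the rest."""
    results = []
    for block in split_top_level_blocks(source):
        try:
            compile(block + "\n", "<string>", "exec")
            results.append(True)
        except SyntaxError:
            results.append(False)
    return results
```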

Two random points:

1) Triple-quoted strings are going to be a problem.
2) Has anyone gotten flex to tokenize Python?  I was looking at the manual
   yesterday and it didn't look impossible, although a bit tricky.
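On point 1, Python's own tokenize module already copes with triple-quoted strings, emitting each one as a single STRING token regardless of embedded newlines; a flex lexer would have to reproduce that. A quick check (modern Python spelling):

```python
import io
import tokenize

# One triple-quoted string spanning three source lines:
src = 's = """one\ntwo\n"""\n'
tokens = list(tokenize.generate_tokens(io.StringIO(src).readline))
string_count = sum(1 for tok in tokens if tok.type == tokenize.STRING)
```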

Cheers,
M.



From jim@interet.com  Tue Sep 26 14:23:47 2000
From: jim@interet.com (James C. Ahlstrom)
Date: Tue, 26 Sep 2000 09:23:47 -0400
Subject: [Python-Dev] Python syntax checker ?
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de>
 <39CF596C.17BA4DC5@interet.com>
 <14799.24252.537090.326130@anthem.concentric.net>
 <39CF69D4.E3649C69@interet.com>
 <200009251536.RAA26375@pandora.informatik.hu-berlin.de>
 <39CFBA1C.3E05B760@interet.com> <14799.49178.2354.77727@bitdiddle.concentric.net>
Message-ID: <39D0A363.2DE02593@interet.com>

Jeremy Hylton wrote:

> I don't understand what problem this is a practical solution to.

To recover from errors better by using YACC's built-in error
recovery features.  Maybe unifying the C and Java parsers.  I
admit I don't know how JPython parses Python.

I kind of threw in my objection to tokenize.py which should be
combined with tokenizer.c.  Of course it is work which only
results in the same operation as before, but reduces the code
base.  Not a popular project.

> But in general, the most practical solution for parsing Python is
> probably to use the Python parser and the builtin parser module.  It
> already exists and seems to work just fine.

A very good point.  I am not 100% sure it is worth it.  But I
found the current parser unworkable for my project.

JimA


From bwarsaw@beopen.com  Tue Sep 26 15:43:24 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Tue, 26 Sep 2000 10:43:24 -0400 (EDT)
Subject: [Python-Dev] Python syntax checker ?
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de>
 <39CF596C.17BA4DC5@interet.com>
 <14799.24252.537090.326130@anthem.concentric.net>
 <39CF69D4.E3649C69@interet.com>
 <200009251536.RAA26375@pandora.informatik.hu-berlin.de>
 <39CFBA1C.3E05B760@interet.com>
 <14799.49178.2354.77727@bitdiddle.concentric.net>
 <39D0A363.2DE02593@interet.com>
Message-ID: <14800.46604.587756.479012@anthem.concentric.net>

>>>>> "JCA" == James C Ahlstrom <jim@interet.com> writes:

    JCA> To recover from errors better by using YACC's built-in error
    JCA> recovery features.  Maybe unifying the C and Java parsers.  I
    JCA> admit I don't know how JPython parses Python.

It uses JavaCC.

http://www.metamata.com/javacc/

-Barry


From thomas@xs4all.net  Tue Sep 26 19:20:53 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Tue, 26 Sep 2000 20:20:53 +0200
Subject: [Python-Dev] [OT] ApacheCon 2000
Message-ID: <20000926202053.K20757@xs4all.nl>

I'm (off-topically) wondering if anyone here is going to the Apache Conference
in London, October 23-25, and how I'm going to recognize them (my PythonLabs
shirt will probably not last more than a day, and I don't have any other
Python-related shirts ;) 

I'm also wondering if anyone knows a halfway-decent hotel somewhat near the
conference site (Olympia Conference Centre, Kensington). I have a
reservation at the Hilton, but it's bloody expensive and damned hard to deal
with, over the phone. I don't mind the price (boss pays) but I'd think
they'd not treat potential customers like village idiots ;P

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From jeremy@beopen.com  Tue Sep 26 20:01:27 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Tue, 26 Sep 2000 15:01:27 -0400 (EDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
Message-ID: <14800.62087.617722.272109@bitdiddle.concentric.net>

We have tar balls and RPMs available on our private FTP site,
python.beopen.com.  If you have a chance to test these on your
platform in the next couple of hours, feedback would be appreciated.
We've tested on FreeBSD and RH and Mandrake Linux.

What we're most interested in hearing about is whether it builds
cleanly and runs the regression test.

The actual release will occur later today from pythonlabs.com.

Jeremy


From fdrake@beopen.com  Tue Sep 26 20:43:42 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Tue, 26 Sep 2000 15:43:42 -0400 (EDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14800.62087.617722.272109@bitdiddle.concentric.net>
References: <14800.62087.617722.272109@bitdiddle.concentric.net>
Message-ID: <14800.64622.961057.204969@cj42289-a.reston1.va.home.com>

Jeremy Hylton writes:
 > We have tar balls and RPMs available on our private FTP site,
 > python.beopen.com.  If you have a chance to test these on your
 > platform in the next couple of hours, feedback would be appreciated.
 > We've tested on FreeBSD and RH and Mandrake Linux.

  I've just built & tested on Caldera 2.3 on the SourceForge compile
farm, and am getting some failures.  If anyone who knows Caldera can
figure these out, that would be great (I'll turn them into proper bug
reports later).
  The failing tests are for fcntl, openpty, and pty.  Here's the
output of regrtest -v for those tests:

bash$ ./python -tt ../Lib/test/regrtest.py -v test_{fcntl,openpty,pty}
test_fcntl
test_fcntl
Status from fnctl with O_NONBLOCK:  0
struct.pack:  '\001\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'
test test_fcntl crashed -- exceptions.IOError: [Errno 37] No locks available
Traceback (most recent call last):
  File "../Lib/test/regrtest.py", line 235, in runtest
    __import__(test, globals(), locals(), [])
  File "../Lib/test/test_fcntl.py", line 31, in ?
    rv = fcntl.fcntl(f.fileno(), FCNTL.F_SETLKW, lockdata)
IOError: [Errno 37] No locks available
test_openpty
test_openpty
Calling os.openpty()
test test_openpty crashed -- exceptions.OSError: [Errno 2] No such file or directory
Traceback (most recent call last):
  File "../Lib/test/regrtest.py", line 235, in runtest
    __import__(test, globals(), locals(), [])
  File "../Lib/test/test_openpty.py", line 9, in ?
    master, slave = os.openpty()
OSError: [Errno 2] No such file or directory
test_pty
test_pty
Calling master_open()
Got master_fd '5', slave_name '/dev/ttyp0'
Calling slave_open('/dev/ttyp0')
test test_pty skipped --  Pseudo-terminals (seemingly) not functional.
2 tests failed: test_fcntl test_openpty
1 test skipped: test_pty


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From effbot@telia.com  Tue Sep 26 21:05:13 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Tue, 26 Sep 2000 22:05:13 +0200
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
References: <14800.62087.617722.272109@bitdiddle.concentric.net>
Message-ID: <004901c027f5$1d743640$766940d5@hagrid>

jeremy wrote:

> We have tar balls and RPMs available on our private FTP site,
> python.beopen.com.  If you have a chance to test these on your
> platform in the next couple of hours, feedback would be appreciated.
> We've tested on FreeBSD and RH and Mandrake Linux.

is the windows installer up to date?

I just grabbed it, only to get a "corrupt installation detected" message
box (okay, I confess: I do have a PythonWare distro installed, but maybe
you could use a slightly more polite message? ;-)

</F>



From tim_one@email.msn.com  Tue Sep 26 20:59:34 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Tue, 26 Sep 2000 15:59:34 -0400
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14800.62087.617722.272109@bitdiddle.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEDGHIAA.tim_one@email.msn.com>

[Jeremy Hylton]
> We have tar balls and RPMs available on our private FTP site,
> python.beopen.com.

I think he meant to add under /pub/tmp/.  In any case, that's where the
2.0b2 Windows installer is now:

    BeOpen-Python-2.0b2.exe
    5,667,334 bytes
    SHA digest:  4ec69734d9931f5b83b391b2a9606c2d4e793428

> If you have a chance to test these on your platform in the next
> couple of hours, feedback would be appreciated.  We've tested on
> FreeBSD and RH and Mandrake Linux.

Would also be cool if at least one person other than me tried the Windows
installer.  I usually pick on Guido for this (just as he used to pick on
me), but, alas, he's somewhere in transit mid-continent.

executives!-ly y'rs  - tim




From jeremy@beopen.com  Tue Sep 26 21:05:44 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Tue, 26 Sep 2000 16:05:44 -0400 (EDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <004901c027f5$1d743640$766940d5@hagrid>
References: <14800.62087.617722.272109@bitdiddle.concentric.net>
 <004901c027f5$1d743640$766940d5@hagrid>
Message-ID: <14801.408.372215.493355@bitdiddle.concentric.net>

>>>>> "FL" == Fredrik Lundh <effbot@telia.com> writes:

  FL> jeremy wrote:
  >> We have tar balls and RPMs available on our private FTP site,
  >> python.beopen.com.  If you have a chance to test these on your
  >> platform in the next couple of hours, feedback would be
  >> appreciated.  We've tested on FreeBSD and RH and Mandrake Linux.

  FL> is the windows installer up to date?

No.  Tim has not done the Windows installer yet.  It's coming...

  FL> I just grabbed it, only to get a "corrupt installation detected"
  FL> message box (okay, I confess: I do have a PythonWare distro
  FL> installed, but maybe you could use a slightly more polite
  FL> message? ;-)

Did you grab the 2.0b1 exe?  I would not be surprised if the one in
/pub/tmp did not work.  It's probably an old pre-release version of
the beta 1 Windows installer.

Jeremy




From tim_one@email.msn.com  Tue Sep 26 21:01:23 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Tue, 26 Sep 2000 16:01:23 -0400
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <004901c027f5$1d743640$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEDHHIAA.tim_one@email.msn.com>

[/F]
> is the windows installer up to date?
>
> I just grabbed it, only to get a "corrupt installation detected" message
> box (okay, I confess: I do have a PythonWare distro installed, but maybe
> you could use a slightly more polite message? ;-)

I'm pretty sure you grabbed it while the scp from my machine was still in
progress.  Try it again!  While BeOpen.com has no official policy toward
PythonWare, I think it's cool.




From tim_one@email.msn.com  Tue Sep 26 21:02:48 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Tue, 26 Sep 2000 16:02:48 -0400
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14801.408.372215.493355@bitdiddle.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEDIHIAA.tim_one@email.msn.com>

All the Windows installers under /pub/tmp/ should work fine.  Although only
2.0b2 should be of any interest to anyone anymore.




From fdrake@beopen.com  Tue Sep 26 21:05:19 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Tue, 26 Sep 2000 16:05:19 -0400 (EDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14800.64622.961057.204969@cj42289-a.reston1.va.home.com>
References: <14800.62087.617722.272109@bitdiddle.concentric.net>
 <14800.64622.961057.204969@cj42289-a.reston1.va.home.com>
Message-ID: <14801.383.799094.8428@cj42289-a.reston1.va.home.com>

Fred L. Drake, Jr. writes:
 >   I've just built & tested on Caldera 2.3 on the SourceForge compile
 > farm, and am getting some failures.  If anyone who knows Caldera can
 > figure these out, that would be great (I'll turn them into proper bug
 > reports later).
 >   The failing tests are for fcntl, openpty, and pty.  Here's the
 > output of regrtest -v for those tests:

  These same tests fail in what appears to be the same way on SuSE 6.3
(using the SourceForge compile farm).  Does anyone know the vagaries
of Linux libc versions enough to tell if this is a libc5/glibc6
difference?  Or a difference in kernel versions?
  On to Slackware...


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From effbot@telia.com  Tue Sep 26 21:08:09 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Tue, 26 Sep 2000 22:08:09 +0200
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
References: <14800.62087.617722.272109@bitdiddle.concentric.net> <004901c027f5$1d743640$766940d5@hagrid>
Message-ID: <000001c027f7$e0915480$766940d5@hagrid>

I wrote:
> I just grabbed it, only to get a "corrupt installation detected" message
> box (okay, I confess: I do have a PythonWare distro installed, but maybe
> you could use a slightly more polite message? ;-)

nevermind; the size of the file keeps changing on the site, so
I guess someone's uploading it (over and over again?)

</F>



From nascheme@enme.ucalgary.ca  Tue Sep 26 21:16:10 2000
From: nascheme@enme.ucalgary.ca (Neil Schemenauer)
Date: Tue, 26 Sep 2000 14:16:10 -0600
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14801.383.799094.8428@cj42289-a.reston1.va.home.com>; from Fred L. Drake, Jr. on Tue, Sep 26, 2000 at 04:05:19PM -0400
References: <14800.62087.617722.272109@bitdiddle.concentric.net> <14800.64622.961057.204969@cj42289-a.reston1.va.home.com> <14801.383.799094.8428@cj42289-a.reston1.va.home.com>
Message-ID: <20000926141610.A6557@keymaster.enme.ucalgary.ca>

On Tue, Sep 26, 2000 at 04:05:19PM -0400, Fred L. Drake, Jr. wrote:
>   These same tests fail in what appears to be the same way on SuSE 6.3
> (using the SourceForge compile farm).  Does anyone know the vagaries
> of Linux libc versions enough to tell if this is a libc5/glibc6
> difference?  Or a difference in kernel versions?

I don't know much but having the output from "uname -a" and "ldd python"
could be helpful (ie. which kernel and which libc).

  Neil


From tim_one@email.msn.com  Tue Sep 26 21:17:52 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Tue, 26 Sep 2000 16:17:52 -0400
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <000001c027f7$e0915480$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEDJHIAA.tim_one@email.msn.com>

> nevermind; the size of the file keeps changing on the site, so
> I guess someone's uploading it (over and over again?)

No, I uploaded it exactly once, but it took over an hour to complete
uploading.  That's done now.  If it *still* fails for you, then gripe.  You
simply jumped the gun by grabbing it before anyone said it was ready.




From fdrake@beopen.com  Tue Sep 26 21:32:21 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Tue, 26 Sep 2000 16:32:21 -0400 (EDT)
Subject: [Python-Dev] 2.0b2 on Slackware 7.0
Message-ID: <14801.2005.843456.598712@cj42289-a.reston1.va.home.com>

  I just built and tested 2.0b2 on Slackware 7.0, and found that
threads failed miserably.  I got the message:

pthread_cond_wait: Interrupted system call

over & over (*hundreds* of times before I killed it) during one of the
tests (test_fork1.py? it scrolled out of the scrollback buffer, 2000
lines).  If I configure it --without-threads it works great.  Unless
you need threads.

uname -a says:
Linux linux1.compile.sourceforge.net 2.2.14-5.0.14smp #1 SMP Sun Mar 26 13:03:52 PST 2000 i686 unknown

ldd ./python says:
	libdb.so.3 => /lib/libdb.so.3 (0x4001c000)
	libdl.so.2 => /lib/libdl.so.2 (0x40056000)
	libutil.so.1 => /lib/libutil.so.1 (0x4005a000)
	libm.so.6 => /lib/libm.so.6 (0x4005d000)
	libc.so.6 => /lib/libc.so.6 (0x4007a000)
	/lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)

  If anyone has any ideas, please send them along!  I'll turn this
into a real bug report later.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From effbot@telia.com  Tue Sep 26 21:48:49 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Tue, 26 Sep 2000 22:48:49 +0200
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
References: <14800.62087.617722.272109@bitdiddle.concentric.net> <004901c027f5$1d743640$766940d5@hagrid> <000001c027f7$e0915480$766940d5@hagrid>
Message-ID: <005901c027fb$2ecf8380$766940d5@hagrid>

> nevermind; the size of the file keeps changing on the site, so
> I guess someone's uploading it (over and over again?)

heh.  just discovered that my ISP has introduced a new
policy: if you send stupid messages, we'll knock you off
the net for 30 minutes...

anyway, I've now downloaded the installer, and it works
pretty well...

:::

just one weird thing:

according to dir, I have 41 megs on my C: disk before
running the installer...

according to the installer, I have 22.3 megs, but Python
only requires 18.3 megs, so it should be okay...

but a little later, the installer claims that it needs an
additional 21.8 megs free space...  if I click ignore, the
installer proceeds (but boy, is it slow or what? ;-)

after installation (but before reboot) (reboot!?), I have
19.5 megs free.

hmm...

after uninstalling, I have 40.7 megs free.  there's still
some crud in the Python20\Tools\idle directory.

after removing that stuff, I have 40.8 megs free.

close enough ;-)

on a second run, it claims that I have 21.3 megs free, and
that the installer needs another 22.8 megs to complete in-
stallation.

:::

without rebooting, IDLE refuses to start, but the console
window works fine...

</F>



From martin@loewis.home.cs.tu-berlin.de  Tue Sep 26 21:34:41 2000
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Tue, 26 Sep 2000 22:34:41 +0200
Subject: [Python-Dev] Bogus SAX test case
Message-ID: <200009262034.WAA09761@loewis.home.cs.tu-berlin.de>

test_sax.py has the test case test_xmlgen_ns, which reads

ns_uri = "http://www.python.org/xml-ns/saxtest/"

    gen.startDocument()
    gen.startPrefixMapping("ns1", ns_uri)
    gen.startElementNS((ns_uri, "doc"), "ns:doc", {})
    gen.endElementNS((ns_uri, "doc"), "ns:doc")
    gen.endPrefixMapping("ns1")
    gen.endDocument()

Translating that to XML, it should look like

<?xml version="1.0" encoding="iso-8859-1"?>
<ns:doc xmlns:ns1="http://www.python.org/xml-ns/saxtest/"></ns:doc>

(or, alternatively, the element could just be empty). Is that the XML
that would produce the above sequence of SAX events?

It seems to me that this XML is ill-formed: the namespace prefix ns is
not defined here. Is that analysis correct? Furthermore, the test
checks whether the generator produces

<?xml version="1.0" encoding="iso-8859-1"?>
<ns1:doc xmlns:ns1="http://www.python.org/xml-ns/saxtest/"></ns1:doc>

It appears that the expected output is bogus; I'd rather expect to get
the original document back.

I noticed this because in PyXML, XMLGenerator *would* produce ns:doc
on output, so the test case broke. I have now changed PyXML to follow
Python 2.0b2 here.

My proposal would be to correct the test case to pass "ns1:doc" as the
qname, and to correct the generator to output the qname if that was
provided by the reader.
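(For reference, the proposed behaviour can be sketched with today's
xml.sax.saxutils XMLGenerator, passing a qname whose prefix matches the
one declared by startPrefixMapping -- the StringIO buffer is just for
illustration:)

```python
from io import StringIO
from xml.sax.saxutils import XMLGenerator

ns_uri = "http://www.python.org/xml-ns/saxtest/"
out = StringIO()
gen = XMLGenerator(out)

gen.startDocument()
gen.startPrefixMapping("ns1", ns_uri)
# The qname prefix ("ns1") matches the declared prefix, as proposed.
gen.startElementNS((ns_uri, "doc"), "ns1:doc", {})
gen.endElementNS((ns_uri, "doc"), "ns1:doc")
gen.endPrefixMapping("ns1")
gen.endDocument()

result = out.getvalue()
```

With a consistent prefix there is no ill-formed "ns:doc" to argue about;
the serialized element and its xmlns declaration agree.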

Comments?

Regards,
Martin


From effbot@telia.com  Tue Sep 26 21:57:11 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Tue, 26 Sep 2000 22:57:11 +0200
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
References: <14800.62087.617722.272109@bitdiddle.concentric.net> <004901c027f5$1d743640$766940d5@hagrid> <000001c027f7$e0915480$766940d5@hagrid> <005901c027fb$2ecf8380$766940d5@hagrid>
Message-ID: <000a01c027fc$6942c800$766940d5@hagrid>

I wrote:
> without rebooting, IDLE refuses to start, but the console
> window works fine...

fwiw, rebooting didn't help.

</F>



From thomas@xs4all.net  Tue Sep 26 21:51:47 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Tue, 26 Sep 2000 22:51:47 +0200
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14800.64622.961057.204969@cj42289-a.reston1.va.home.com>; from fdrake@beopen.com on Tue, Sep 26, 2000 at 03:43:42PM -0400
References: <14800.62087.617722.272109@bitdiddle.concentric.net> <14800.64622.961057.204969@cj42289-a.reston1.va.home.com>
Message-ID: <20000926225146.L20757@xs4all.nl>

On Tue, Sep 26, 2000 at 03:43:42PM -0400, Fred L. Drake, Jr. wrote:

>   The failing tests are for fcntl, openpty, and pty.  Here's the
> output of regrtest -v for those tests:

> bash$ ./python -tt ../Lib/test/regrtest.py -v test_{fcntl,openpty,pty}
> test_fcntl
> test_fcntl
> Status from fnctl with O_NONBLOCK:  0
> struct.pack:  '\001\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'test test_fcntl crashed -- exceptions.IOError: [Errno 37] No locks available
> Traceback (most recent call last):
>   File "../Lib/test/regrtest.py", line 235, in runtest
>     __import__(test, globals(), locals(), [])
>   File "../Lib/test/test_fcntl.py", line 31, in ?
>     rv = fcntl.fcntl(f.fileno(), FCNTL.F_SETLKW, lockdata)
> IOError: [Errno 37] No locks available

Looks like your /tmp directory doesn't support locks. Perhaps it's some kind
of RAMdisk ? See if you can find a 'normal' filesystem (preferably not NFS)
where you have write-permission, and change the /tmp/delete-me path in
test_fcntl to that.
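(A quick way to check whether a directory's filesystem supports POSIX
locks at all -- a hedged sketch using fcntl.lockf, which wraps the same
fcntl locking the test exercises without packing a struct flock by hand:)

```python
import fcntl
import os
import tempfile

def supports_locking(directory):
    """Return True if fcntl-style locks work on `directory`'s filesystem."""
    fd, path = tempfile.mkstemp(dir=directory)
    try:
        try:
            # On a lock-less filesystem (some NFS/RAM disks) this raises
            # OSError, e.g. "No locks available".
            fcntl.lockf(fd, fcntl.LOCK_EX)
            fcntl.lockf(fd, fcntl.LOCK_UN)
            return True
        except OSError:
            return False
    finally:
        os.close(fd)
        os.unlink(path)
```

Running supports_locking("/tmp") on the failing box should return False
if this diagnosis is right.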

> test_openpty
> test_openpty
> Calling os.openpty()
> test test_openpty crashed -- exceptions.OSError: [Errno 2] No such file or directory
> Traceback (most recent call last):
>   File "../Lib/test/regrtest.py", line 235, in runtest
>     __import__(test, globals(), locals(), [])
>   File "../Lib/test/test_openpty.py", line 9, in ?
>     master, slave = os.openpty()
> OSError: [Errno 2] No such file or directory

If you're running glibc (which is pretty likely, because IIRC libc5 didn't
have an openpty() call, so test_openpty should be skipped) openpty() is
defined as a library routine that tries to open /dev/ptmx. That's the kernel
support for Unix98 pty's. However, it's possible that support is turned off
in the default Caldera kernel, or perhaps /dev/ptmx does not exist (what
kernel are you running, btw ?) /dev/ptmx was new in 2.1.x, so if you're
running 2.0 kernels, that might be the problem.

I'm not sure if you're supposed to get that error, though. I've never tested
glibc's openpty() support on a system that had it turned off, though I have
seen *almost* exactly the same error message from BSDI's openpty() call,
which works by sequentially trying to open each pty, until it finds one that
works. 

> test_pty
> test_pty
> Calling master_open()
> Got master_fd '5', slave_name '/dev/ttyp0'
> Calling slave_open('/dev/ttyp0')
> test test_pty skipped --  Pseudo-terminals (seemingly) not functional.
> 2 tests failed: test_fcntl test_openpty
> 1 test skipped: test_pty

The 'normal' procedure for opening pty's is to open the master, and if that
works, the pty is functional... But it looks like you could open the master,
but not the slave. Possibly permission problems, or a messed up /dev
directory. Do you know if /dev/ttyp0 was in use while you were running the
test ? (it's pretty likely it was, since it's usually the first pty on the
search list.) What might be happening here is that the master is openable,
for some reason, even if the pty/tty pair is already in use, but the slave
isn't openable. That would mean that the pty library is basically
nonfunctional on those platforms, and it's definitely not the behaviour
I've seen on other platforms :P And this wouldn't be a new thing, because
the pty module has always worked this way.
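(The master-then-slave sequence being described is roughly the following
sketch -- not the actual pty.py code; the BSD-style device names are
illustrative:)

```python
import os

def open_pty_pair():
    """Open a pty master/slave pair, preferring the Unix98 route."""
    try:
        return os.openpty()  # glibc: opens /dev/ptmx under the hood
    except (AttributeError, OSError):
        pass
    # BSD-style fallback: probe legacy /dev/ptyXY names until one opens,
    # then open the matching /dev/ttyXY slave -- this is the step where
    # an in-use or misconfigured pair can fail *after* the master
    # apparently succeeded.
    for x in "pqrstuvwxyz":
        for y in "0123456789abcdef":
            try:
                master = os.open("/dev/pty%s%s" % (x, y), os.O_RDWR)
            except OSError:
                continue
            return master, os.open("/dev/tty%s%s" % (x, y), os.O_RDWR)
    raise OSError("out of pty devices")
```

If the slave open in the fallback loop raises while the master open did
not, you get exactly the "master works, slave doesn't" symptom above.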

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From tim_one@email.msn.com  Tue Sep 26 21:56:33 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Tue, 26 Sep 2000 16:56:33 -0400
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <005901c027fb$2ecf8380$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEDLHIAA.tim_one@email.msn.com>

[Fredrik Lundh]
> ...
> just one weird thing:
>
> according to dir, I have 41 megs on my C: disk before
> running the installer...
>
> according to the installer, I have 22.3 megs,

This is the Wise "Check free disk space" "Script item".  Now you know as
much about it as I do <wink>.

> but Python only requires 18.3 megs, so it should be okay...

Noting that 22.3 + 18.3 ~= 41.  So it sounds like Wise's "Disk space
remaining" is trying to tell you how much space you'll have left *after* the
install.  Indeed, if you try unchecking various items in the "Select
Components" dialog, you should see that the "Disk space remaining" changes
accordingly.

> but a little later, the installer claims that it needs an
> additional 21.8 megs free space...  if I click ignore, the
> installer proceeds (but boy, is it slow or what? ;-)

Win95?  Which version?  The installer runs very quickly for me (Win98).
I've never tried it without plenty of free disk space, though; maybe it
needs temp space for unpacking?  Dunno.

> after installation (but before reboot) (reboot!?), I have
> 19.5 megs free.

It's unclear here whether the installer did or did not *say* it wanted you
to reboot.  It should ask for a reboot if and only if it needs to update an
MS shared DLL (the installer ships with MSVCRT.DLL and MSVCIRT.DLL).

> hmm...
>
> after uninstalling, I have 40.7 megs free.  there's still
> some crud in the Python20\Tools\idle directory.

Like what?  .pyc files, perhaps?  Like most uninstallers, it will not delete
files it didn't install, so all .pyc files (or anything else) generated
after the install won't be touched.

> after removing that stuff, I have 40.8 megs free.
>
> close enough ;-)
>
> on a second run, it claims that I have 21.3 megs free, and
> that the installer needs another 22.8 megs to complete in-
> stallation.

Noted.

> without rebooting, IDLE refuses to start, but the console
> window works fine...

If it told you to reboot and you didn't, I don't really care what happens if
you ignore the instructions <wink>.  Does IDLE start after you reboot?

thanks-for-the-pain!-ly y'rs  - tim




From tim_one@email.msn.com  Tue Sep 26 22:02:14 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Tue, 26 Sep 2000 17:02:14 -0400
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <000a01c027fc$6942c800$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEDMHIAA.tim_one@email.msn.com>

[/F]
> I wrote:
> > without rebooting, IDLE refuses to start, but the console
> > window works fine...
>
> fwiw, rebooting didn't help.

So let's start playing bug report:  Which version of Windows?  By what means
did you attempt to start IDLE?  What does "refuses to start" mean (error
msg, system freeze, hourglass that never goes away, pops up & vanishes,
nothing visible happens at all, ...)?  Does Tkinter._test() work from a
DOS-box Python?  Do you have magical Tcl/Tk envars set for your own
development work?  Stuff like that.




From effbot@telia.com  Tue Sep 26 22:30:09 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Tue, 26 Sep 2000 23:30:09 +0200
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
References: <LNBBLJKPBEHFEDALKOLCOEDMHIAA.tim_one@email.msn.com>
Message-ID: <001b01c02800$f3996000$766940d5@hagrid>

tim wrote,
> > fwiw, rebooting didn't help.

> So let's start playing bug report:

oh, I've figured it out (what did you expect ;-). read on.

> Which version of Windows?

Windows 95 OSR 2.

> By what means did you attempt to start IDLE?

> What does "refuses to start" mean (error msg, system freeze,
> hourglass that never goes away, pops up & vanishes, nothing
> visible happens at all, ...)?

idle never appears.

> Does Tkinter._test() work from a DOS-box Python?

yes -- but it hangs if I close it with the "x" button (same
problem as I've reported earlier).

> Do you have magical Tcl/Tk envars set for your own
> development work?

bingo!

(a global PYTHONPATH setting also resulted in some interesting
behaviour... on my wishlist for 2.1: an option telling Python to
ignore all PYTHON* environment variables...)

</F>



From tim_one@email.msn.com  Tue Sep 26 22:50:54 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Tue, 26 Sep 2000 17:50:54 -0400
Subject: [Python-Dev] Crisis aversive
Message-ID: <LNBBLJKPBEHFEDALKOLCGEEAHIAA.tim_one@email.msn.com>

I'm going to take a nap now.  If there's a Windows crisis for the duration,
mail pleas for urgent assistance to bwarsaw@beopen.com -- especially if it
involves interactions between a Python script running as an NT service and
python-mode.el under NT Emacs.  Barry *loves* those!

Back online in a few hours.

sometimes-when-you-hit-the-wall-you-stick-ly y'rs  - tim




From fdrake@beopen.com  Tue Sep 26 22:50:16 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Tue, 26 Sep 2000 17:50:16 -0400 (EDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <20000926141610.A6557@keymaster.enme.ucalgary.ca>
References: <14800.62087.617722.272109@bitdiddle.concentric.net>
 <14800.64622.961057.204969@cj42289-a.reston1.va.home.com>
 <14801.383.799094.8428@cj42289-a.reston1.va.home.com>
 <20000926141610.A6557@keymaster.enme.ucalgary.ca>
Message-ID: <14801.6680.507173.995404@cj42289-a.reston1.va.home.com>

Neil Schemenauer writes:
 > I don't know much but having the output from "uname -a" and "ldd python"
 > could be helpful (ie. which kernel and which libc).

Under SuSE 6.3, uname -a says:
Linux linux1.compile.sourceforge.net 2.2.14-5.0.14smp #1 SMP Sun Mar 26 13:03:52 PST 2000 i686 unknown

ldd ./python says:
	libdb.so.3 => /lib/libdb.so.3 (0x4001d000)
	libpthread.so.0 => /lib/libpthread.so.0 (0x4005c000)
	libdl.so.2 => /lib/libdl.so.2 (0x4006e000)
	libutil.so.1 => /lib/libutil.so.1 (0x40071000)
	libm.so.6 => /lib/libm.so.6 (0x40075000)
	libc.so.6 => /lib/libc.so.6 (0x40092000)
	/lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)

Under Caldera 2.3, uname -a says:
Linux linux1.compile.sourceforge.net 2.2.14-5.0.14smp #1 SMP Sun Mar 26 13:03:52 PST 2000 i686 unknown

ldd ./python says:
	libdb.so.3 => /lib/libdb.so.3 (0x4001a000)
	libpthread.so.0 => /lib/libpthread.so.0 (0x40055000)
	libdl.so.2 => /lib/libdl.so.2 (0x40066000)
	libutil.so.1 => /lib/libutil.so.1 (0x4006a000)
	libm.so.6 => /lib/libm.so.6 (0x4006d000)
	libc.so.6 => /lib/libc.so.6 (0x4008a000)
	/lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)

  Now, it may be that something strange is going on since these are
the "virtual environments" on SourceForge.  I'm not sure these are
really the same thing as running those systems.  I'm looking at the
script to start SuSE; there's nothing really there but a chroot call;
perhaps there's a kernel/library mismatch?
  I'll have to ask about how these are supposed to work a little
more; kernel/libc mismatches could be a real problem in this
environment.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From fdrake@beopen.com  Tue Sep 26 22:52:59 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Tue, 26 Sep 2000 17:52:59 -0400 (EDT)
Subject: [Python-Dev] Crisis aversive
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEEAHIAA.tim_one@email.msn.com>
References: <LNBBLJKPBEHFEDALKOLCGEEAHIAA.tim_one@email.msn.com>
Message-ID: <14801.6843.516029.921562@cj42289-a.reston1.va.home.com>

Tim Peters writes:
 > sometimes-when-you-hit-the-wall-you-stick-ly y'rs  - tim

  I told you to take off that Velcro body armor!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From bwarsaw@beopen.com  Tue Sep 26 22:57:22 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Tue, 26 Sep 2000 17:57:22 -0400 (EDT)
Subject: [Python-Dev] Crisis aversive
References: <LNBBLJKPBEHFEDALKOLCGEEAHIAA.tim_one@email.msn.com>
Message-ID: <14801.7106.388711.967339@anthem.concentric.net>

>>>>> "TP" == Tim Peters <tim_one@email.msn.com> writes:

    TP> I'm going to take a nap now.  If there's a Windows crisis for
    TP> the duration, mail pleas for urgent assistance to
    TP> bwarsaw@beopen.com -- especially if it involves interactions
    TP> between a Python script running as an NT service and
    TP> python-mode.el under NT Emacs.  Barry *loves* those!

Indeed!  I especially love these because I don't have a working
Windows system at the moment, so every such bug just gets classified
as non-reproducible.

or-"works-for-me"-about-as-well-as-if-i-did-have-windows-ly y'rs,
-Barry


From tommy@ilm.com  Tue Sep 26 23:55:02 2000
From: tommy@ilm.com (Victor the Cleaner)
Date: Tue, 26 Sep 2000 15:55:02 -0700 (PDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14800.62087.617722.272109@bitdiddle.concentric.net>
References: <14800.62087.617722.272109@bitdiddle.concentric.net>
Message-ID: <14801.10496.986326.537462@mace.lucasdigital.com>

Hi All,

Jeremy asked me to send this report (which I originally sent just to
him) along to the rest of python-dev, so here ya go:

------------%< snip %<----------------------%< snip %<------------

Hey Jeremy,

Configured (--without-gcc), made and ran just fine on my IRIX6.5 O2.
The "make test" output indicated a lot of skipped modules since I
didn't do any Setup.in modifications before making everything, and the 
only error came from test_unicodedata:

test test_unicodedata failed -- Writing: 'e052289ecef97fc89c794cf663cb74a64631d34e', expected: 'b88684df19fca8c3d0ab31f040dd8de89f7836fe'

Nothing else that ran had any errors.  Here's the final output:

77 tests OK.
1 test failed: test_unicodedata
24 tests skipped: test_al test_audioop test_cd test_cl test_crypt test_dbm test_dl test_gdbm test_gl test_gzip test_imageop test_imgfile test_linuxaudiodev test_minidom test_nis test_pty test_pyexpat test_rgbimg test_sax test_sunaudiodev test_timing test_winreg test_winsound test_zlib

is there anything I can do to help debug the unicodedata failure?

------------%< snip %<----------------------%< snip %<------------

Jeremy Hylton writes:
| We have tar balls and RPMs available on our private FTP site,
| python.beopen.com.  If you have a chance to test these on your
| platform in the next couple of hours, feedback would be appreciated.
| We've tested on FreeBSD and RH and Mandrake Linux.
| 
| What we're most interested in hearing about is whether it builds
| cleanly and runs the regression test.
| 
| The actual release will occur later today from pythonlabs.com.
| 
| Jeremy
| 
| _______________________________________________
| Python-Dev mailing list
| Python-Dev@python.org
| http://www.python.org/mailman/listinfo/python-dev


From jeremy@beopen.com  Wed Sep 27 00:07:03 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Tue, 26 Sep 2000 19:07:03 -0400 (EDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14801.10496.986326.537462@mace.lucasdigital.com>
References: <14800.62087.617722.272109@bitdiddle.concentric.net>
 <14801.10496.986326.537462@mace.lucasdigital.com>
Message-ID: <14801.11287.963056.896941@bitdiddle.concentric.net>

I was just talking with Guido who wondered if it might simply be an
optimizer bug with the IRIX compiler.  Does the same problem occur with
optimization turned off?

Jeremy


From tommy@ilm.com  Wed Sep 27 01:01:54 2000
From: tommy@ilm.com (Victor the Cleaner)
Date: Tue, 26 Sep 2000 17:01:54 -0700 (PDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14801.11287.963056.896941@bitdiddle.concentric.net>
References: <14800.62087.617722.272109@bitdiddle.concentric.net>
 <14801.10496.986326.537462@mace.lucasdigital.com>
 <14801.11287.963056.896941@bitdiddle.concentric.net>
Message-ID: <14801.14476.284150.194816@mace.lucasdigital.com>

yes, it does.  I changed this line in the toplevel Makefile:

OPT =	-O -OPT:Olimit=0

to

OPT =

and saw no optimization going on during compiling (yes, I made clean
first) but I got the exact same result from test_unicodedata.


Jeremy Hylton writes:
| I was just talking with Guido who wondered if it might simply be an
| optimizer bug with the IRIX compiler.  Does the same problem occur with
| optimization turned off?
| 
| Jeremy


From gward@python.net  Wed Sep 27 01:11:07 2000
From: gward@python.net (Greg Ward)
Date: Tue, 26 Sep 2000 20:11:07 -0400
Subject: [Python-Dev] Stupid distutils bug
Message-ID: <20000926201107.A1179@beelzebub>

No, I mean *really* stupid.  So stupid that I nearly fell out of my
chair with embarassment when I saw Thomas Heller's report of it, because
I released Distutils 0.9.3 *before* reading my mail.  D'oh!

Anyways, this is such a colossally stupid bug that I'm *glad* 2.0b2
hasn't gone out yet: it gives me a chance to checkin the (3-line) fix.
Here's what I plan to do:
  * tag distutils-0_9_3 (ie. last bit of bureaucracy for the
    broken, about-to-be-superseded release)
  * checkin my fix
  * release Distutils 0.9.4 (with this 3-line fix and *nothing* more)
  * tag distutils-0_9_4
  * calmly sit back and wait for Jeremy and Tim to flay me alive

Egg-on-face, paper-bag-on-head, etc. etc...

        Greg

PS. be sure to cc me: I'm doing this from home, but my python-dev
subscription goes to work.

-- 
Greg Ward                                      gward@python.net
http://starship.python.net/~gward/


From jeremy@beopen.com  Wed Sep 27 01:25:53 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Tue, 26 Sep 2000 20:25:53 -0400 (EDT)
Subject: [Python-Dev] Stupid distutils bug
In-Reply-To: <20000926201107.A1179@beelzebub>
References: <20000926201107.A1179@beelzebub>
Message-ID: <14801.16017.841176.232036@bitdiddle.concentric.net>

Greg,

The distribution tarball was cut this afternoon around 2pm.  It's way
too late to change anything in it.  Sorry.

Jeremy


From gward@python.net  Wed Sep 27 01:22:32 2000
From: gward@python.net (Greg Ward)
Date: Tue, 26 Sep 2000 20:22:32 -0400
Subject: [Python-Dev] Stupid distutils bug
In-Reply-To: <14801.16017.841176.232036@bitdiddle.concentric.net>; from jeremy@beopen.com on Tue, Sep 26, 2000 at 08:25:53PM -0400
References: <20000926201107.A1179@beelzebub> <14801.16017.841176.232036@bitdiddle.concentric.net>
Message-ID: <20000926202232.D975@beelzebub>

On 26 September 2000, Jeremy Hylton said:
> The distribution tarball was cut this afternoon around 2pm.  It's way
> to late to change anything in it.  Sorry.

!@$!#!  I didn't see anything on python.org or pythonlabs.com, so I
assumed it wasn't done yet.  Oh well, Distutils 0.9.4 will go out
shortly anyways.  I'll just go off in a corner and castigate myself
mercilessly.  Arghgh!

        Greg
-- 
Greg Ward                                      gward@python.net
http://starship.python.net/~gward/


From jeremy@beopen.com  Wed Sep 27 01:33:22 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Tue, 26 Sep 2000 20:33:22 -0400 (EDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14801.14476.284150.194816@mace.lucasdigital.com>
References: <14800.62087.617722.272109@bitdiddle.concentric.net>
 <14801.10496.986326.537462@mace.lucasdigital.com>
 <14801.11287.963056.896941@bitdiddle.concentric.net>
 <14801.14476.284150.194816@mace.lucasdigital.com>
Message-ID: <14801.16466.928385.529906@bitdiddle.concentric.net>

Sounded too easy, didn't it?  We'll just have to wait for MAL or /F to
followup.

Jeremy


From tim_one@email.msn.com  Wed Sep 27 01:34:51 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Tue, 26 Sep 2000 20:34:51 -0400
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14801.10496.986326.537462@mace.lucasdigital.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEEGHIAA.tim_one@email.msn.com>

[Victor the Cleaner]
> Jeremy asked me to send this report (which I originally sent just to
> him) along to the rest of python-dev, so here ya go:

Bug reports should go to SourceForge, else as often as not they'll get
lost.

> ------------%< snip %<----------------------%< snip %<------------
>
> Hey Jeremy,
>
> Configured (--without-gcc), made and ran just fine on my IRIX6.5 O2.
> The "make test" output indicated a lot of skipped modules since I
> didn't do any Setup.in modifications before making everything, and the
> only error came from test_unicodedata:
>
> test test_unicodedata failed -- Writing:
> 'e052289ecef97fc89c794cf663cb74a64631d34e', expected:
> 'b88684df19fca8c3d0ab31f040dd8de89f7836fe'

The problem appears to be that the test uses the secret "unicode-internal"
encoding, which is dependent upon the big/little-endianess of your platform.
I can reproduce your flawed hash exactly on my platform by replacing this
line:

        h.update(u''.join(data).encode('unicode-internal'))

in test_unicodedata.py's test_methods() with this block:

        import array
        xxx = array.array("H", map(ord, u''.join(data)))
        xxx.byteswap()
        h.update(xxx)

When you do this from a shell:

>>> u"A".encode("unicode-internal")
'A\000'
>>>

I bet you get

'\000A'

Right?
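The endianness dependence is easy to demonstrate without the codec at
all -- a small sketch, where array("H") stands in for the 16-bit code
units that "unicode-internal" exposes:

```python
import array

# The code point U+0041 ("A") as one 16-bit code unit.
codes = array.array("H", [ord("A")])
native = codes.tobytes()   # b'A\x00' on little-endian, b'\x00A' on big-endian
codes.byteswap()
swapped = codes.tobytes()  # the other platform's view of the same data

# A hash over these raw bytes differs between platforms even though the
# Unicode data is identical -- hence the mismatched digest on IRIX.
```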




From tim_one@email.msn.com  Wed Sep 27 01:39:49 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Tue, 26 Sep 2000 20:39:49 -0400
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14801.16466.928385.529906@bitdiddle.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEEHHIAA.tim_one@email.msn.com>

> Sounded too easy, didn't it?

Not at all:  an optimization bug on SGI is the *usual* outcome <0.5 wink>!

> We'll just have to wait for MAL or /F to followup.

See my earlier mail; the cause is thoroughly understood; it actually means
Unicode is working fine on his machine; but I don't know enough about
Unicode encodings to know how to rewrite the test in a portable way.




From akuchlin@mems-exchange.org  Wed Sep 27 01:43:24 2000
From: akuchlin@mems-exchange.org (Andrew Kuchling)
Date: Tue, 26 Sep 2000 20:43:24 -0400
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <001b01c02800$f3996000$766940d5@hagrid>; from effbot@telia.com on Tue, Sep 26, 2000 at 11:30:09PM +0200
References: <LNBBLJKPBEHFEDALKOLCOEDMHIAA.tim_one@email.msn.com> <001b01c02800$f3996000$766940d5@hagrid>
Message-ID: <20000926204324.A20476@newcnri.cnri.reston.va.us>

On Tue, Sep 26, 2000 at 11:30:09PM +0200, Fredrik Lundh wrote:
>on my wishlist for 2.1: an option telling Python to
>ignore all PYTHON* environment variables...)

You could just add an environment variable that did this... dohhh!

--am"Raymond Smullyan"k



From greg@cosc.canterbury.ac.nz  Wed Sep 27 01:51:05 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed, 27 Sep 2000 12:51:05 +1200 (NZST)
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: <Pine.SOL.4.21.0009261309240.22922-100000@yellow.csi.cam.ac.uk>
Message-ID: <200009270051.MAA23788@s454.cosc.canterbury.ac.nz>

By the way, one of the examples that comes with my
Plex module is an almost-complete Python scanner.
Just thought I'd mention it in case it would help
anyone.

http://www.cosc.canterbury.ac.nz/~greg/python/Plex

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From gward@python.net  Wed Sep 27 01:53:12 2000
From: gward@python.net (Greg Ward)
Date: Tue, 26 Sep 2000 20:53:12 -0400
Subject: [Python-Dev] Distutils 1.0 code freeze: Oct 1
Message-ID: <20000926205312.A1470@beelzebub>

Considering the following schedule of events:

  Oct  4: I go out of town (away from email, off the net, etc.)
  Oct 10: planned release of Python 2.0
  Oct 12: I'm back in town, ready to hack! (and wondering why it's
          so quiet around here...)

the Distutils 1.0 release will go out October 1 or 2.  I don't need
quite as much code freeze time as the full Python release, but let's put 
it this way: if there are features you want added to the Distutils that
I don't already know about, forget about it.  Changes currently under
consideration:

  * Rene Liebscher's rearrangement of the CCompiler classes; most
    of this is just reducing the amount of code, but it does
    add some minor features, so it's under consideration.

  * making byte-compilation more flexible: should be able to
    generate both .pyc and .pyo files, and should be able to
    do it at build time or install time (developer's and packager's
    discretion)
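(The byte-compilation step itself is a one-liner per module -- a sketch
using py_compile; the temporary "example.py" is hypothetical, standing in
for each installed .py file, and whether this runs at build or install
time is the packaging decision above:)

```python
import os
import py_compile
import tempfile

# Hypothetical module to compile, standing in for an installed .py file.
tmpdir = tempfile.mkdtemp()
src = os.path.join(tmpdir, "example.py")
with open(src, "w") as f:
    f.write("x = 1\n")

# Writes the byte-compiled file and returns its path; historically the
# optimized variant came from running the same step under python -O.
pyc_path = py_compile.compile(src)
```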

If you know about any outstanding Distutils bugs, please tell me *now*.
Put 'em in the SourceForge bug database if you're wondering why I
haven't fixed them yet -- they might have gotten lost, I might not know
about 'em, etc.  If you're not sure, put it in SourceForge.

Stuff that will definitely have to wait until after 1.0:

  * a "test" command (standard test framework for Python modules)

  * finishing the "config" command (auto-configuration)

  * installing package meta-data, to support "what *do* I have
    installed, anyways?" queries, uninstallation, upgrades, etc.

Blue-sky projects:

  * standard documentation processing

  * intra-module dependencies

        Greg
-- 
Greg Ward                                      gward@python.net
http://starship.python.net/~gward/


From dkwolfe@pacbell.net  Wed Sep 27 06:15:52 2000
From: dkwolfe@pacbell.net (Dan Wolfe)
Date: Tue, 26 Sep 2000 22:15:52 -0700
Subject: [Python-Dev] FW: regarding the Python Developer posting...
Message-ID: <0G1J00FEC58TA3@mta6.snfc21.pbi.net>

Hi Marc-Andre,

Regarding:

>You could try to enable the macro at the top of unicodectype.c:
> 
>#if defined(macintosh) || defined(MS_WIN64)
>/*XXX This was required to avoid a compiler error for an early Win64
> * cross-compiler that was used for the port to Win64. When the platform is
> * released the MS_WIN64 inclusion here should no longer be necessary.
> */
>/* This probably needs to be defined for some other compilers too. It 
>breaks the
>** 5000-label switch statement up into switches with around 1000 cases each.
>*/
>#define BREAK_SWITCH_UP return 1; } switch (ch) {
>#else
>#define BREAK_SWITCH_UP /* nothing */
>#endif

I've tested it with BREAK_SWITCH_UP enabled and it fixes the 
problem - same as using the -traditional-cpp flag.  However, before we 
commit this change I need to see if they are planning on fixing it... 
remember, this Mac OS X is beta software.... :-)

>If it does compile with the work-around enabled, please
>give us a set of defines which identify the compiler and
>platform so we can enable it per default for your setup.

Automake is driving me nuts... it's a long way from a GUI for this poor 
old Mac guy.  I'll see what I can do... stay tuned. ;-)

- Dan


From tim_one@email.msn.com  Wed Sep 27 06:39:35 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Wed, 27 Sep 2000 01:39:35 -0400
Subject: [Python-Dev] FW: regarding the Python Developer posting...
In-Reply-To: <0G1J00FEC58TA3@mta6.snfc21.pbi.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEFFHIAA.tim_one@email.msn.com>

[about the big switch in unicodectype.c]

Dan, I'll suggest again that you try working from the current CVS tree
instead.  The giant switch stmt doesn't even exist anymore!  Few developers
are going to volunteer their time to help with code that's already been
replaced.  Talk to Steven Majewski, too -- he's also keen to see this work
on Macs, and knows a lot about Python internals.




From dkwolfe@pacbell.net  Wed Sep 27 08:02:00 2000
From: dkwolfe@pacbell.net (Dan Wolfe)
Date: Wed, 27 Sep 2000 00:02:00 -0700
Subject: [Python-Dev] FW: regarding the Python Developer posting...
Message-ID: <0G1J0028SA6KS4@mta5.snfc21.pbi.net>

>>[about the big switch in unicodectype.c]
>
>[Tim: use the current CVS tree instead... code's been replace...]

duh! gotta read them archives before following up on a request... 
can't trust the hyper-active Python development team with a code 
freeze.... <wink>

I'm happy to report that it now compiles correctly without a 
-traditional-cpp flag.

Unfortunately, test_re.py now seg faults.... which is caused by 
test_sre.py... in particular the following:

src/Lib/test/test_sre.py

if verbose:
    print 'Test engine limitations'

# Try nasty case that overflows the straightforward recursive
# implementation of repeated groups.
#test(r"""sre.match(r'(x)*', 50000*'x').span()""",
#   (0, 50000), RuntimeError)
#test(r"""sre.match(r'(x)*y', 50000*'x'+'y').span()""",
#     (0, 50001), RuntimeError)
#test(r"""sre.match(r'(x)*?y', 50000*'x'+'y').span()""",
#     (0, 50001), RuntimeError)


test_unicodedata fails... same endian problem as SGI...
test_format fails... looks like a problem with the underlying C code.

Here's the config instructions for Mac OS X Public Beta:

Building Python 2.0b1 + CVS
9/26/2000
Dan Wolfe

./configure -with-threads -with-dyld -with-suffix=.exe

change in src/config.h:

/* Define if you have POSIX threads */
#define _POSIX_THREADS 1

to 

/* #define _POSIX_THREADS 1 */

change in src/Makefile

# Compiler options passed to subordinate makes
OPT=		-g -O2 -OPT:Olimit=0

to

OPT=		-g -O2

comment out the following in src/Lib/test/test_sre.py

if verbose:
    print 'Test engine limitations'

# Try nasty case that overflows the straightforward recursive
# implementation of repeated groups.
#test(r"""sre.match(r'(x)*', 50000*'x').span()""",
#   (0, 50000), RuntimeError)
#test(r"""sre.match(r'(x)*y', 50000*'x'+'y').span()""",
#     (0, 50001), RuntimeError)
#test(r"""sre.match(r'(x)*?y', 50000*'x'+'y').span()""",
#     (0, 50001), RuntimeError)


After install, manually go into /usr/local/bin and strip the .exe 
suffix off the installed files.


- Dan





From trentm@ActiveState.com  Wed Sep 27 08:32:33 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Wed, 27 Sep 2000 00:32:33 -0700
Subject: [Python-Dev] WHOA!!! Screw up on my part: how do I undo this (Re: [Python-checkins] CVS: black - Imported sources)
In-Reply-To: <200009270706.AAA21107@slayer.i.sourceforge.net>; from tmick@users.sourceforge.net on Wed, Sep 27, 2000 at 12:06:06AM -0700
References: <200009270706.AAA21107@slayer.i.sourceforge.net>
Message-ID: <20000927003233.C19872@ActiveState.com>

I was playing with a different SourceForge project and I screwed up my
CVSROOT (used Python's instead). Sorry, sorry!

How do I undo this cleanly? I could 'cvs remove' the README.txt file, but that
would still leave the top-level 'black/' turd, right? Do the SourceForge admin
guys have to manually kill the 'black' directory in the repository?


or-failing-that-can-my-pet-project-make-it-into-python-2.0-<weak-smile>-ly
yours,
Trent



On Wed, Sep 27, 2000 at 12:06:06AM -0700, Trent Mick wrote:
> Update of /cvsroot/python/black
> In directory slayer.i.sourceforge.net:/tmp/cvs-serv20977
> 
> Log Message:
> first import into CVS
> 
> Status:
> 
> Vendor Tag:	vendor
> Release Tags:	start
> 		
> N black/README.txt
> 
> No conflicts created by this import
> 
> 
> ***** Bogus filespec: -
> ***** Bogus filespec: Imported
> ***** Bogus filespec: sources
> 
> _______________________________________________
> Python-checkins mailing list
> Python-checkins@python.org
> http://www.python.org/mailman/listinfo/python-checkins

-- 
Trent Mick
TrentM@ActiveState.com


From effbot@telia.com  Wed Sep 27 09:06:44 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Wed, 27 Sep 2000 10:06:44 +0200
Subject: [Python-Dev] FW: regarding the Python Developer posting...
References: <0G1J0028SA6KS4@mta5.snfc21.pbi.net>
Message-ID: <000c01c02859$e1502420$766940d5@hagrid>

dan wrote:
> >[Tim: use the current CVS tree instead... code's been replace...]
> 
> duh! gotta read them archives before following up on a request... 
> can't trust the hyper-active Python development team with a code 
> freeze.... <wink>

heh.  your bug report was the main reason for getting this change
into 2.0b2, and we completely forgot to tell you about it...

> Unfortunately, test_re.py now seg faults.... which is caused by 
> test_sre.py... in particular the following:
> 
> src/Lib/test/test_sre.py
> 
> if verbose:
>     print 'Test engine limitations'
> 
> # Try nasty case that overflows the straightforward recursive
> # implementation of repeated groups.
> #test(r"""sre.match(r'(x)*', 50000*'x').span()""",
> #   (0, 50000), RuntimeError)
> #test(r"""sre.match(r'(x)*y', 50000*'x'+'y').span()""",
> #     (0, 50001), RuntimeError)
> #test(r"""sre.match(r'(x)*?y', 50000*'x'+'y').span()""",
> #     (0, 50001), RuntimeError)

umm.  I assume it bombs if you uncomment those lines, right?

you could try adding a Mac OS clause to the recursion limit stuff
in Modules/_sre.c:

#if !defined(USE_STACKCHECK)
#if defined(...whatever's needed to detect Mac OS X...)
#define USE_RECURSION_LIMIT 5000
#elif defined(MS_WIN64) || defined(__LP64__) || defined(_LP64)
/* require smaller recursion limit for a number of 64-bit platforms:
   Win64 (MS_WIN64), Linux64 (__LP64__), Monterey (64-bit AIX) (_LP64) */
/* FIXME: maybe the limit should be 40000 / sizeof(void*) ? */
#define USE_RECURSION_LIMIT 7500
#else
#define USE_RECURSION_LIMIT 10000
#endif
#endif

replace "...whatever...", and try larger values than 5000 (or smaller,
if necessary; 10000 is clearly too large for your platform).

(alternatively, you can increase the stack size.  maybe it's very small
by default?)

</F>



From larsga@garshol.priv.no  Wed Sep 27 09:12:45 2000
From: larsga@garshol.priv.no (Lars Marius Garshol)
Date: 27 Sep 2000 10:12:45 +0200
Subject: [Python-Dev] Bogus SAX test case
In-Reply-To: <200009262034.WAA09761@loewis.home.cs.tu-berlin.de>
References: <200009262034.WAA09761@loewis.home.cs.tu-berlin.de>
Message-ID: <m3hf72uubm.fsf@lambda.garshol.priv.no>

* Martin v. Loewis
| 
| <?xml version="1.0" encoding="iso-8859-1"?>
| <ns:doc xmlns:ns1="http://www.python.org/xml-ns/saxtest/"></ns:doc>
| 
| (or, alternatively, the element could just be empty). Is that the
| XML that would produce above sequence of SAX events?

Nope, it's not.  No XML document could produce that particular
sequence of events.
 
| It seems to me that this XML is ill-formed, the namespace prefix ns
| is not defined here. Is that analysis correct? 

Not entirely.  The XML is perfectly well-formed, but it's not
namespace-compliant.

| Furthermore, the test checks whether the generator produces
| 
| <?xml version="1.0" encoding="iso-8859-1"?>
| <ns1:doc xmlns:ns1="http://www.python.org/xml-ns/saxtest/"></ns1:doc>
| 
| It appears that the expected output is bogus; I'd rather expect to get
| the original document back.

What original document? :-)
 
| My proposal would be to correct the test case to pass "ns1:doc" as
| the qname, 

I see that as being the best fix, and have now committed it.

| and to correct the generator to output the qname if that was
| provided by the reader.

We could do that, but the namespace name and the qname are supposed to
be equivalent in any case, so I don't see any reason to change it.
One problem with making that change is that it would no longer be
possible to roundtrip XML -> pyexpat -> SAX -> xmlgen -> XML, because
pyexpat does not provide qnames.

--Lars M.



From tim_one@email.msn.com  Wed Sep 27 09:45:57 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Wed, 27 Sep 2000 04:45:57 -0400
Subject: [Python-Dev] 2.0b2 is ... released?
Message-ID: <LNBBLJKPBEHFEDALKOLCIEFIHIAA.tim_one@email.msn.com>

The other guys are sleeping and I'm on vacation.  It *appears* that our West
Coast webmasters may have finished doing their thing, so pending Jeremy's
official announcement perhaps you'd just like to check it out:

    http://www.pythonlabs.com/products/python2.0/

I can't swear it's a release.  *Looks* like one, though!




From fredrik@pythonware.com  Wed Sep 27 10:00:34 2000
From: fredrik@pythonware.com (Fredrik Lundh)
Date: Wed, 27 Sep 2000 11:00:34 +0200
Subject: [Python-Dev] 2.0b2 is ... released?
References: <LNBBLJKPBEHFEDALKOLCIEFIHIAA.tim_one@email.msn.com>
Message-ID: <016201c02861$66aee2d0$0900a8c0@SPIFF>


> The other guys are sleeping and I'm on vacation.  It *appears* that our
West
> Coast webmasters may have finished doing their thing, so pending Jeremy's
> official announcement perhaps you'd just like to check it out:
>
>     http://www.pythonlabs.com/products/python2.0/
>
> I can't swear it's a release.  *Looks* like one, though!

the daily URL says so too:

    http://www.pythonware.com/daily/

(but even though we removed some 2.5 megs of unicode stuff,
the new tarball is nearly as large as the previous one.  less filling,
more taste?)

</F>



From fredrik@pythonware.com  Wed Sep 27 10:08:04 2000
From: fredrik@pythonware.com (Fredrik Lundh)
Date: Wed, 27 Sep 2000 11:08:04 +0200
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
References: <LNBBLJKPBEHFEDALKOLCGEEGHIAA.tim_one@email.msn.com>
Message-ID: <018401c02862$72311820$0900a8c0@SPIFF>

tim wrote:
> > test test_unicodedata failed -- Writing:
> > 'e052289ecef97fc89c794cf663cb74a64631d34e', expected:
> > 'b88684df19fca8c3d0ab31f040dd8de89f7836fe'
>
> The problem appears to be that the test uses the secret "unicode-internal"
> encoding, which is dependent upon the big/little-endianness of your
> platform.

my fault -- when I saw that, I asked myself "why the heck doesn't mal
just use repr, like I did?" and decided that he used "unicode-escape"
to make sure the test didn't break if the repr encoding changed.

too bad my brain didn't trust my eyes...

> I can reproduce your flawed hash exactly on my platform by replacing this
> line:
>
>         h.update(u''.join(data).encode('unicode-internal'))

I suggest replacing "unicode-internal" with "utf-8" (which is as canonical
as anything can be...)

</F>



From tim_one@email.msn.com  Wed Sep 27 10:19:03 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Wed, 27 Sep 2000 05:19:03 -0400
Subject: [Python-Dev] 2.0b2 is ... released?
In-Reply-To: <016201c02861$66aee2d0$0900a8c0@SPIFF>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEFLHIAA.tim_one@email.msn.com>

>> The other guys are sleeping and I'm on vacation.  It *appears* that our
>> West Coast webmasters may have finished doing their thing, so
>> pending Jeremy's official announcement perhaps you'd just like to
>> check it out:
>>
>>     http://www.pythonlabs.com/products/python2.0/
>>
>> I can't swear it's a release.  *Looks* like one, though!

[/F]
> the daily URL says so too:
>
>     http://www.pythonware.com/daily/

Thanks, /F!  I'll *believe* it's a release if I can ever complete
downloading the Windows installer from that site.  S-l-o-w!

> (but even though we removed some 2.5 megs of unicode stuff,
> the new tarball is nearly as large as the previous one.  less filling,
> more taste?)

Heh, I expected *that* one:  the fact that the Unicode stuff was highly
compressible wasn't lost on gzip either.  The Windows installer shrunk less
than 10%, and that includes savings also due to (a) not shipping two full
copies of Lib/ anymore (looked like an ancient stray duplicate line in the
installer script), and (b) not shipping the debug .lib files anymore.
There's a much nicer savings after it's all unpacked, of course.

Hey!  Everyone check out the "what's new in 2.0b2" section!  This was an
incredible amount of good work in a 3-week period, and you should all be
proud of yourselves.  And *especially* proud if you actually helped <wink>.

if-you-just-got-in-the-way-we-love-you-too-ly y'rs  - tim




From mal@lemburg.com  Wed Sep 27 13:13:01 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Wed, 27 Sep 2000 14:13:01 +0200
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
References: <LNBBLJKPBEHFEDALKOLCGEEGHIAA.tim_one@email.msn.com> <018401c02862$72311820$0900a8c0@SPIFF>
Message-ID: <39D1E44D.C7E080D@lemburg.com>

Fredrik Lundh wrote:
> 
> tim wrote:
> > > test test_unicodedata failed -- Writing:
> > > 'e052289ecef97fc89c794cf663cb74a64631d34e', expected:
> > > 'b88684df19fca8c3d0ab31f040dd8de89f7836fe'
> >
> > The problem appears to be that the test uses the secret "unicode-internal"
> > > encoding, which is dependent upon the big/little-endianness of your
> > > platform.
> 
> my fault -- when I saw that, I asked myself "why the heck doesn't mal
> just use repr, like I did?" and decided that he used "unicode-escape"
> to make sure the test didn't break if the repr encoding changed.
> 
> too bad my brain didn't trust my eyes...

repr() would have been a bad choice since the past has shown
that repr() does change. I completely forgot about the endianness
which affects the hash value.
 
> > I can reproduce your flawed hash exactly on my platform by replacing this
> > line:
> >
> >         h.update(u''.join(data).encode('unicode-internal'))
> 
> I suggest replacing "unicode-internal" with "utf-8" (which is as canonical
> as anything can be...)

I think UTF-8 will bring about problems with surrogates (that's
why I used the unicode-internal codec). I haven't checked this
though... I'll fix this ASAP.
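The endianness dependence under discussion can be sketched in a few lines.
This is an illustration only: hashlib and the explicit utf-16-le/utf-16-be
codec names stand in for the era's sha module and "unicode-internal" codec.

```python
# Hashing the raw 16-bit code units makes the digest depend on the
# platform's byte order; hashing UTF-8 bytes does not.
import hashlib

data = u"Python"
little = data.encode("utf-16-le")     # byte order on e.g. x86
big = data.encode("utf-16-be")        # byte order on e.g. SPARC
assert little != big                  # same text, different bytes...
# ...so the digests differ too:
assert hashlib.sha1(little).hexdigest() != hashlib.sha1(big).hexdigest()

utf8 = data.encode("utf-8")           # identical bytes on every platform
print(hashlib.sha1(utf8).hexdigest())
```

This is why switching the test to UTF-8 makes its checksum platform-independent.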

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From thomas@xs4all.net  Wed Sep 27 13:19:42 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Wed, 27 Sep 2000 14:19:42 +0200
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14801.6680.507173.995404@cj42289-a.reston1.va.home.com>; from fdrake@beopen.com on Tue, Sep 26, 2000 at 05:50:16PM -0400
References: <14800.62087.617722.272109@bitdiddle.concentric.net> <14800.64622.961057.204969@cj42289-a.reston1.va.home.com> <14801.383.799094.8428@cj42289-a.reston1.va.home.com> <20000926141610.A6557@keymaster.enme.ucalgary.ca> <14801.6680.507173.995404@cj42289-a.reston1.va.home.com>
Message-ID: <20000927141942.M20757@xs4all.nl>

On Tue, Sep 26, 2000 at 05:50:16PM -0400, Fred L. Drake, Jr. wrote:

[ test_fcntl, test_pty and test_openpty failing on SuSe & Caldera Linux ]

>   Now, it may be that something strange is going on since these are
> the "virtual environments" on SourceForge.  I'm not sure these are
> really the same thing as running those systems.  I'm looking at the
> script to start SuSE; there's nothing really there but a chroot call;
> perhaps there's a kernel/library mismatch?

Nope, you almost got it. You were so close, too! It's not a kernel/library
thing, it's the chroot call ;) I'm *guessing* here, but it looks like you
get a faked privileged shell in a chrooted environment, which isn't actually
privileged (kind of like the FreeBSD 'jail' thing.) It doesn't surprise me
one bit that it fails on those three tests. In fact, I'm (delightedly)
surprised that it didn't fail more tests! But these three require some
close interaction between the kernel, the libc, and the filesystem (instead
of just kernel/fs, libc/fs or kernel/libc.)

It could be anything: security-checks on owner/mode in the kernel,
security-checks on same in libc, or perhaps something sees the chroot and
decides that deception is not going to work in this case. If Sourceforge is
serious about this virtual environment service they probably do want to know
about this, though. I'll see if I can get my SuSe-loving colleague to
compile&test Python on his box, and if that works alright, I think we can
safely claim this is a Sourceforge bug, not a Python one. I don't know
anyone using Caldera, though.

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From mal@lemburg.com  Wed Sep 27 13:20:30 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Wed, 27 Sep 2000 14:20:30 +0200
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
References: <LNBBLJKPBEHFEDALKOLCGEEGHIAA.tim_one@email.msn.com> <018401c02862$72311820$0900a8c0@SPIFF> <39D1E44D.C7E080D@lemburg.com>
Message-ID: <39D1E60E.95E04302@lemburg.com>

"M.-A. Lemburg" wrote:
> 
> Fredrik Lundh wrote:
> >
> > tim wrote:
> > > > test test_unicodedata failed -- Writing:
> > > > 'e052289ecef97fc89c794cf663cb74a64631d34e', expected:
> > > > 'b88684df19fca8c3d0ab31f040dd8de89f7836fe'
> > >
> > > The problem appears to be that the test uses the secret "unicode-internal"
> > > encoding, which is dependent upon the big/little-endianness of your
> > > platform.
> >
> > my fault -- when I saw that, I asked myself "why the heck doesn't mal
> > just use repr, like I did?" and decided that he used "unicode-escape"
> > to make sure the test didn't break if the repr encoding changed.
> >
> > too bad my brain didn't trust my eyes...
> 
> repr() would have been a bad choice since the past has shown
> that repr() does change. I completely forgot about the endianness
> which affects the hash value.
> 
> > > I can reproduce your flawed hash exactly on my platform by replacing this
> > > line:
> > >
> > >         h.update(u''.join(data).encode('unicode-internal'))
> >
> > I suggest replacing "unicode-internal" with "utf-8" (which is as canonical
> > as anything can be...)
> 
> I think UTF-8 will bring about problems with surrogates (that's
> why I used the unicode-internal codec). I haven't checked this
> though... I'll fix this ASAP.

UTF-8 works for me. I'll check in a patch.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From fdrake@beopen.com  Wed Sep 27 14:22:56 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Wed, 27 Sep 2000 09:22:56 -0400 (EDT)
Subject: [Python-Dev] 2.0b2 is ... released?
In-Reply-To: <016201c02861$66aee2d0$0900a8c0@SPIFF>
References: <LNBBLJKPBEHFEDALKOLCIEFIHIAA.tim_one@email.msn.com>
 <016201c02861$66aee2d0$0900a8c0@SPIFF>
Message-ID: <14801.62640.276852.209527@cj42289-a.reston1.va.home.com>

Fredrik Lundh writes:
 > (but even though we removed some 2.5 megs of unicode stuff,
 > the new tarball is nearly as large as the previous one.  less filling,
 > more taste?)

  Umm... Zesty!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From jeremy@beopen.com  Wed Sep 27 17:04:36 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Wed, 27 Sep 2000 12:04:36 -0400 (EDT)
Subject: [Python-Dev] Python 2.0b2 is released!
Message-ID: <14802.6804.717866.176697@bitdiddle.concentric.net>

Python 2.0b2 is released.  The BeOpen PythonLabs and our cast of
SourceForge volunteers have fixed many bugs since the 2.0b1 release
three weeks ago.  Please go here to pick up the new release:

    http://www.pythonlabs.com/tech/python2.0/

There's a tarball, a Windows installer, RedHat RPMs, online
documentation, and a long list of fixed bugs.

The final release of Python 2.0 is expected in early- to mid-October.
We would appreciate feedback on the current beta release in order to
fix any remaining bugs before the final release.  Confirmation of
build and test success on less common platforms is also helpful.

Python 2.0 has many new features, including the following:

  - Augmented assignment, e.g. x += 1
  - List comprehensions, e.g. [x**2 for x in range(10)]
  - Extended import statement, e.g. import Module as Name
  - Extended print statement, e.g. print >> file, "Hello"
  - Optional collection of cyclical garbage
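Three of the new features listed above still read the same today; a quick
sketch (the extended print form, print >> file, is specific to the 2.x
line and omitted here):

```python
# Extended import, augmented assignment, and a list comprehension:
import string as s            # extended import: bind a module to a new name

x = 1
x += 1                        # augmented assignment
squares = [n ** 2 for n in range(10)]   # list comprehension

assert x == 2
assert squares == [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```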

This release fixes many known bugs.  The list of open bugs has dropped
to 50, and more than 100 bug reports have been resolved since Python
1.6.  To report a new bug, use the SourceForge bug tracker
http://sourceforge.net/bugs/?func=addbug&group_id=5470

-- Jeremy Hylton <http://www.python.org/~jeremy/>



From jeremy@beopen.com  Wed Sep 27 17:31:35 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Wed, 27 Sep 2000 12:31:35 -0400 (EDT)
Subject: [Python-Dev] Re: Python 2.0b2 is released!
In-Reply-To: <14802.6804.717866.176697@bitdiddle.concentric.net>
References: <14802.6804.717866.176697@bitdiddle.concentric.net>
Message-ID: <14802.8423.701972.950382@bitdiddle.concentric.net>

The correct URL for the Python 2.0b2 release is:
    http://www.pythonlabs.com/products/python2.0/

-- Jeremy Hylton <http://www.python.org/~jeremy/>


From tommy@ilm.com  Wed Sep 27 18:26:53 2000
From: tommy@ilm.com (Victor the Cleaner)
Date: Wed, 27 Sep 2000 10:26:53 -0700 (PDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEEGHIAA.tim_one@email.msn.com>
References: <14801.10496.986326.537462@mace.lucasdigital.com>
 <LNBBLJKPBEHFEDALKOLCGEEGHIAA.tim_one@email.msn.com>
Message-ID: <14802.11605.281385.45283@mace.lucasdigital.com>

Tim Peters writes:
| [Victor the Cleaner]
| > Jeremy asked me to send this report (which I originally sent just to
| > him) along to the rest of python-dev, so here ya go:
| 
| Bug reports should go to SourceForge, else as often as not they'll get
| lost.

Sorry, this wasn't intended to be a bug report (not yet, at least).
Jeremy asked for feedback on the release, and that's all I was trying
to give. 


| When you do this from a shell:
| 
| >>> u"A".encode("unicode-internal")
| 'A\000'
| >>>
| 
| I bet you get
| 
| '\000A'
| 
| Right?

Right, as usual. :)  Sounds like MAL already has this one fixed,
too... 
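The 'A\000' versus '\000A' flip in this exchange is exactly the byte-order
difference. With explicit-endian codecs (modern names, used here only for
illustration) both results can be reproduced on any machine:

```python
# "unicode-internal" returned each 16-bit code unit in the platform's
# native byte order; the explicit-endian UTF-16 codecs pin it down:
assert u"A".encode("utf-16-le") == b"A\x00"   # the little-endian result
assert u"A".encode("utf-16-be") == b"\x00A"   # the big-endian result
assert u"A".encode("utf-8") == b"A"           # endian-independent
```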


From martin@loewis.home.cs.tu-berlin.de  Wed Sep 27 19:36:04 2000
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Wed, 27 Sep 2000 20:36:04 +0200
Subject: [XML-SIG] Re: [Python-Dev] Bogus SAX test case
In-Reply-To: <m3hf72uubm.fsf@lambda.garshol.priv.no> (message from Lars Marius
 Garshol on 27 Sep 2000 10:12:45 +0200)
References: <200009262034.WAA09761@loewis.home.cs.tu-berlin.de> <m3hf72uubm.fsf@lambda.garshol.priv.no>
Message-ID: <200009271836.UAA00872@loewis.home.cs.tu-berlin.de>

> | My proposal would be to correct the test case to pass "ns1:doc" as
> | the qname, 
> 
> I see that as being the best fix, and have now committed it.

Thanks!

> | and to correct the generator to output the qname if that was
> | provided by the reader.
> 
> We could do that, but the namespace name and the qname are supposed to
> be equivalent in any case, so I don't see any reason to change it.

What about

<foo xmlns:mine="martin:von.loewis">
  <bar xmlns:meiner="martin:von.loewis">
    <mine:foobar/>
    <meiner:foobar/>
  </bar>
</foo>

In that case, one of the qnames will change on output when your
algorithm is used - even if the parser provided the original names. By
the way, when parsing this text via

import xml.sax,xml.sax.handler,xml.sax.saxutils,StringIO
p=xml.sax.make_parser()
p.setContentHandler(xml.sax.saxutils.XMLGenerator())
p.setFeature(xml.sax.handler.feature_namespaces,1)
i=xml.sax.InputSource()
i.setByteStream(StringIO.StringIO("""<foo xmlns:mine="martin:von.loewis"><bar xmlns:meiner="martin:von.loewis"><mine:foobar/><meiner:foobar/></bar></foo>"""))
p.parse(i)
print

I get a number of interesting failures. Would you mind looking into
that?

On a related note, it seems that "<xml:hello/>" won't unparse
properly, either...

Regards,
Martin


From mal@lemburg.com  Wed Sep 27 19:53:24 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Wed, 27 Sep 2000 20:53:24 +0200
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
References: <14801.10496.986326.537462@mace.lucasdigital.com>
 <LNBBLJKPBEHFEDALKOLCGEEGHIAA.tim_one@email.msn.com> <14802.11605.281385.45283@mace.lucasdigital.com>
Message-ID: <39D24224.EAF1E144@lemburg.com>

Victor the Cleaner wrote:
> 
> Tim Peters writes:
> | [Victor the Cleaner]
> | > Jeremy asked me to send this report (which I originally sent just to
> | > him) along to the rest of python-dev, so here ya go:
> |
> | Bug reports should go to SourceForge, else as often as not they'll get
> | lost.
> 
> Sorry, this wasn't intended to be a bug report (not yet, at least).
> Jeremy asked for feedback on the release, and that's all I was trying
> to give.
> 
> | When you do this from a shell:
> |
> | >>> u"A".encode("unicode-internal")
> | 'A\000'
> | >>>
> |
> | I bet you get
> |
> | '\000A'
> |
> | Right?
> 
> Right, as usual. :)  Sounds like MAL already has this one fixed,
> too...

It is fixed in CVS ... don't know if the patch made it into
the release though. The new test now uses UTF-8 as encoding
which is endian-independent.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From tim_one@email.msn.com  Wed Sep 27 20:25:54 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Wed, 27 Sep 2000 15:25:54 -0400
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <39D24224.EAF1E144@lemburg.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEHIHIAA.tim_one@email.msn.com>

[Victor the Cleaner]
> Sorry, this wasn't intended to be a bug report (not yet, at least).
> Jeremy asked for feedback on the release, and that's all I was trying
> to give.

Tommy B, is that you, hiding behind a Victor mask?  Cool!  I was really
directing my rancor at Jeremy <wink>:  by the time he fwd'ed the msg here,
it was already too late to change the release, so it had already switched
from "feedback" to "bug".

[MAL]
> It is fixed in CVS ... don't know if the patch made it into
> the release though. The new test now uses UTF-8 as encoding
> which is endian-independent.

Alas, it was not in the release.  I didn't even know about it until after
the installers were all built and shipped.  Score another for last-second
improvements <0.5 wink>.

Very, very weird:  we all know that SHA is believed to be cryptologically
secure, so there was no feasible way to deduce why the hashes were
different.  But I was coming down with a fever at the time (now in full
bloom, alas), and just stared at the two hashes:

    good:  b88684df19fca8c3d0ab31f040dd8de89f7836fe
    bad:   e052289ecef97fc89c794cf663cb74a64631d34e

Do you see the pattern?  Ha!  I did!  They both end with "e", and in my
fuzzy-headed state I immediately latched on to that and thought "hmm ... 'e'
is for 'endian'".  Else I wouldn't have had a clue!

should-get-sick-more-often-i-guess-ly y'rs  - tim




From mal@lemburg.com  Wed Sep 27 20:38:13 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Wed, 27 Sep 2000 21:38:13 +0200
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
References: <LNBBLJKPBEHFEDALKOLCEEHIHIAA.tim_one@email.msn.com>
Message-ID: <39D24CA5.7F914B7E@lemburg.com>

[Tim Peters wrote about the test_unicodedata.py glitch]:
> 
> [MAL]
> > It is fixed in CVS ... don't know if the patch made it into
> > the release though. The new test now uses UTF-8 as encoding
> > which is endian-independent.
> 
> Alas, it was not in the release.  I didn't even know about it until after
> the installers were all built and shipped.  Score another for last-second
> improvements <0.5 wink>.

You're right. This shouldn't have been applied so close to the
release date/time. Looks like all reviewers work on little
endian machines...
 
> Very, very weird:  we all know that SHA is believed to be cryptologically
> secure, so there was no feasible way to deduce why the hashes were
> different. But I was coming down with a fever at the time (now in full
> bloom, alas), and just stared at the two hashes:
> 
>     good:  b88684df19fca8c3d0ab31f040dd8de89f7836fe
>     bad:   e052289ecef97fc89c794cf663cb74a64631d34e
> 
> Do you see the pattern?  Ha!  I did!  They both end with "e", and in my
> fuzzy-headed state I immediately latched on to that and thought "hmm ... 'e'
> is for 'endian'".  Else I wouldn't have had a clue!

Well, let's think of it as a hidden feature: the test fails
if and only if it is run on a big endian machine... should
have named the test to something more obvious, e.g.
test_littleendian.py ;-)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From jeremy@beopen.com  Wed Sep 27 20:59:52 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Wed, 27 Sep 2000 15:59:52 -0400 (EDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <39D24CA5.7F914B7E@lemburg.com>
References: <LNBBLJKPBEHFEDALKOLCEEHIHIAA.tim_one@email.msn.com>
 <39D24CA5.7F914B7E@lemburg.com>
Message-ID: <14802.20920.420649.929910@bitdiddle.concentric.net>

>>>>> "MAL" == M -A Lemburg <mal@lemburg.com> writes:

  MAL> [Tim Peters wrote about the test_unicodedata.py glitch]:
  >>
  >> [MAL]
  >> > It is fixed in CVS ... don't know if the patch made it into the
  >> > release though. The new test now uses UTF-8 as encoding which
  >> > is endian-independent.
  >>
  >> Alas, it was not in the release.  I didn't even know about it
  >> until after the installers were all built and shipped.  Score
  >> another for last-second improvements <0.5 wink>.

  MAL> You're right. This shouldn't have been applied so close to the
  MAL> release date/time. Looks like all reviewers work on little
  MAL> endian machines...
 
Yes.  I was a bit reckless; test_unicodedata and the latest distutils
checkins had been made following the official code freeze and were
not fixing a showstopper bug.  I should have deferred them.

We'll have to be a lot more careful about the 2.0 final release.  PEP
200 has a tentative ship date of Oct. 10.  We should probably have a
code freeze on Oct. 6 and leave the weekend and Monday for verifying
that there are no build problems on little- and big-endian platforms.

Jeremy


From skip@mojam.com (Skip Montanaro)  Wed Sep 27 21:15:23 2000
From: skip@mojam.com (Skip Montanaro) (Skip Montanaro)
Date: Wed, 27 Sep 2000 15:15:23 -0500 (CDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14802.20920.420649.929910@bitdiddle.concentric.net>
References: <LNBBLJKPBEHFEDALKOLCEEHIHIAA.tim_one@email.msn.com>
 <39D24CA5.7F914B7E@lemburg.com>
 <14802.20920.420649.929910@bitdiddle.concentric.net>
Message-ID: <14802.21851.446506.215291@beluga.mojam.com>

    Jeremy> We'll have to be a lot more careful about the 2.0 final release.
    Jeremy> PEP 200 has a tentative ship date of Oct. 10.  We should probably
    Jeremy> have a code freeze on Oct. 6 and leave the weekend and Monday
    Jeremy> for verifying that there are no build problems on little- and
    Jeremy> big-endian platforms.

Since you can't test on all platforms, if you fix platform-specific bugs
between now and final release, I suggest you make bundles (tar, Windows
installer, whatever) available (without need for CVS) and specifically ask
the people who reported those bugs to check things out using the appropriate
bundle(s).  This is as opposed to making such stuff available and then
posting a general note to the various mailing lists asking people to try
things out.  I think if you're more direct with people who have
"interesting" platforms, you will improve the chances of wringing out a few
more bugs before the actual release.

Skip



From jeremy@beopen.com  Wed Sep 27 22:10:21 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Wed, 27 Sep 2000 17:10:21 -0400 (EDT)
Subject: [Python-Dev] buffer overflow in PC/getpathp.c
Message-ID: <14802.25149.170239.848119@bitdiddle.concentric.net>

Mark,

Would you have some time to review PC/getpathp.c for buffer overflow
vulnerabilities?  I just fixed several problems in Modules/getpath.c
that were caused by assuming that certain environment variables and
argv[0] would contain strings less than MAXPATHLEN bytes long.  I
assume the Windows version of the code could have the same
vulnerabilities.  

Jeremy

PS Is there some other Windows expert who could check into this?


From Fredrik Lundh" <effbot@telia.com  Wed Sep 27 22:41:45 2000
From: Fredrik Lundh" <effbot@telia.com (Fredrik Lundh)
Date: Wed, 27 Sep 2000 23:41:45 +0200
Subject: [Python-Dev] stupid floating point question...
Message-ID: <001e01c028cb$bd20f620$766940d5@hagrid>

each unicode character has an optional "numeric value",
which may be a fractional value.

the unicodedata module provides a "numeric" function,
which returns a Python float representing this fraction.
this is currently implemented by a large switch stmnt,
containing entries like:

    case 0x2159:
        return (double) 1 / 6;

if I replace the numbers here with integer variables (read
from the character type table) and return the result to
Python, will str(result) be the same thing as before for all
reasonable values?

</F>



From tim_one@email.msn.com  Wed Sep 27 22:39:21 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Wed, 27 Sep 2000 17:39:21 -0400
Subject: [Python-Dev] stupid floating point question...
In-Reply-To: <001e01c028cb$bd20f620$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEIEHIAA.tim_one@email.msn.com>

Try again?  I have no idea what you're asking.  Obviously, str(i) won't look
anything like str(1./6) for any integer i, so *that's* not what you're
asking.

> -----Original Message-----
> From: python-dev-admin@python.org [mailto:python-dev-admin@python.org]On
> Behalf Of Fredrik Lundh
> Sent: Wednesday, September 27, 2000 5:42 PM
> To: python-dev@python.org
> Subject: [Python-Dev] stupid floating point question...
>
>
> each unicode character has an optional "numeric value",
> which may be a fractional value.
>
> the unicodedata module provides a "numeric" function,
> which returns a Python float representing this fraction.
> this is currently implemented by a large switch stmnt,
> containing entries like:
>
>     case 0x2159:
>         return (double) 1 / 6;
>
> if I replace the numbers here with integer variables (read
> from the character type table) and return the result to
> Python, will str(result) be the same thing as before for all
> reasonable values?
>
> </F>




From Fredrik Lundh" <effbot@telia.com  Wed Sep 27 22:59:48 2000
From: Fredrik Lundh" <effbot@telia.com (Fredrik Lundh)
Date: Wed, 27 Sep 2000 23:59:48 +0200
Subject: [Python-Dev] stupid floating point question...
References: <LNBBLJKPBEHFEDALKOLCIEIEHIAA.tim_one@email.msn.com>
Message-ID: <005b01c028ce$4234bb60$766940d5@hagrid>

> Try again?  I have no idea what you're asking.  Obviously, str(i) won't
> look anything like str(1./6) for any integer i, so *that's* not what you're
> asking.

consider this code:

        PyObject* myfunc1(void) {
            return PyFloat_FromDouble((double) A / B);
        }

(where A and B are constants (#defines, or spelled out))

and this code:

        PyObject* myfunc2(int a, int b) {
            return PyFloat_FromDouble((double) a / b);
        }

if I call the latter with a=A and b=B, and pass the resulting
Python float through "str", will I get the same result on all
ANSI-compatible platforms?

(in the first case, the compiler will most likely do the casting
and the division for me, while in the latter case, it's done at
runtime)

</F>
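To restate the question as a runnable sketch (in Python rather than C, purely for illustration): on IEEE-754 platforms, the same division done on literal constants and on runtime variables must produce bit-identical doubles, so str() of the two results agrees.

```python
# Sketch (assumes IEEE-754 doubles, as on all mainstream platforms):
# the division folded from constants and the division performed at
# runtime yield the same bits, so their str() forms match.
A, B = 1, 6

literal = 1.0 / 6.0          # may be folded ahead of time
a, b = A, B
runtime = float(a) / b       # division clearly performed at runtime

assert literal == runtime
assert str(literal) == str(runtime)
print(str(runtime))          # prints 0.16666666666666666
```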



From tommy@ilm.com  Wed Sep 27 22:48:50 2000
From: tommy@ilm.com (Victor the Cleaner)
Date: Wed, 27 Sep 2000 14:48:50 -0700 (PDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14802.21851.446506.215291@beluga.mojam.com>
References: <LNBBLJKPBEHFEDALKOLCEEHIHIAA.tim_one@email.msn.com>
 <39D24CA5.7F914B7E@lemburg.com>
 <14802.20920.420649.929910@bitdiddle.concentric.net>
 <14802.21851.446506.215291@beluga.mojam.com>
Message-ID: <14802.27432.535375.758974@mace.lucasdigital.com>

I'll be happy to test IRIX again when the time comes...

Skip Montanaro writes:
| 
|     Jeremy> We'll have to be a lot more careful about the 2.0 final release.
|     Jeremy> PEP 200 has a tentative ship date of Oct. 10.  We should probably
|     Jeremy> have a code freeze on Oct. 6 and leave the weekend and Monday
|     Jeremy> for verifying that there are no build problems on little- and
|     Jeremy> big-endian platforms.
| 
| Since you can't test on all platforms, if you fix platform-specific bugs
| between now and final release, I suggest you make bundles (tar, Windows
| installer, whatever) available (without need for CVS) and specifically ask
| the people who reported those bugs to check things out using the appropriate
| bundle(s).  This is as opposed to making such stuff available and then
| posting a general note to the various mailing lists asking people to try
| things out.  I think if you're more direct with people who have
| "interesting" platforms, you will improve the chances of wringing out a few
| more bugs before the actual release.
| 
| Skip
| 
| 
| _______________________________________________
| Python-Dev mailing list
| Python-Dev@python.org
| http://www.python.org/mailman/listinfo/python-dev


From tommy@ilm.com  Wed Sep 27 22:51:23 2000
From: tommy@ilm.com (Victor the Cleaner)
Date: Wed, 27 Sep 2000 14:51:23 -0700 (PDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <LNBBLJKPBEHFEDALKOLCEEHIHIAA.tim_one@email.msn.com>
References: <39D24224.EAF1E144@lemburg.com>
 <LNBBLJKPBEHFEDALKOLCEEHIHIAA.tim_one@email.msn.com>
Message-ID: <14802.27466.120918.480152@mace.lucasdigital.com>

Tim Peters writes:
| [Victor the Cleaner]
| > Sorry, this wasn't intended to be bug report (not yet, at least).
| > Jeremy asked for feedback on the release, and that's all I was trying
| > to give.
| 
| Tommy B, is that you, hiding behind a Victor mask?  Cool!  I was really
| directing my rancor at Jeremy <wink>:  by the time he fwd'ed the msg here,
| it was already too late to change the release, so it had already switched
| from "feedback" to "bug".

Yup, it's me.  I've been leery of posting from my work address for a
long time, but Ping seemed to be getting away with it so I figured
"what the hell" ;)

| 
| Do you see the pattern?  Ha!  I did!  They both end with "e", and in my
| fuzzy-headed state I immediately latched on to that and thought "hmm ... 'e'
| is for 'endian'".  Else I wouldn't have had a clue!

I thought maybe 'e' was for 'eeeeeew' when you realized this was IRIX ;)

| 
| should-get-sick-more-often-i-guess-ly y'rs  - tim

Or just stay sick.  That's what I do...


From tim_one@email.msn.com  Wed Sep 27 23:08:50 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Wed, 27 Sep 2000 18:08:50 -0400
Subject: [Python-Dev] stupid floating point question...
In-Reply-To: <005b01c028ce$4234bb60$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEIIHIAA.tim_one@email.msn.com>

Ah!  I wouldn't worry about this -- go right ahead.  Not only the str()'s,
but even the repr()'s, are very likely to be identical.

A *good* compiler won't collapse *any* fp expressions at compile-time,
because doing so can change the 754 semantics at runtime (for example, the
evaluation of 1./6 triggers the 754 "inexact" signal, and the compiler has
no way to know whether the user is expecting that to happen at runtime, so a
good compiler will leave it alone ... at KSR, I munged our C compiler to
*try* collapsing at compile-time, capturing the 754 state before and
comparing it to the 754 state after, doing that again for each possible
rounding mode, and leaving the runtime code in if and only if any evaluation
changed any state; but, that was a *damned* good compiler <wink>).

> -----Original Message-----
> From: Fredrik Lundh [mailto:effbot@telia.com]
> Sent: Wednesday, September 27, 2000 6:00 PM
> To: Tim Peters; python-dev@python.org
> Subject: Re: [Python-Dev] stupid floating point question...
>
>
> > Try again?  I have no idea what you're asking.  Obviously, str(i) won't
> > look anything like str(1./6) for any integer i, so *that's* not
> > what you're asking.
>
> consider this code:
>
>         PyObject* myfunc1(void) {
>             return PyFloat_FromDouble((double) A / B);
>         }
>
> (where A and B are constants (#defines, or spelled out))
>
> and this code:
>
>         PyObject* myfunc2(int a, int b) {
>             return PyFloat_FromDouble((double) a / b);
>         }
>
> if I call the latter with a=A and b=B, and pass the resulting
> Python float through "str", will I get the same result on all
> ANSI-compatible platforms?
>
> (in the first case, the compiler will most likely do the casting
> and the division for me, while in the latter case, it's done at
> runtime)




From mal@lemburg.com  Wed Sep 27 23:08:42 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Thu, 28 Sep 2000 00:08:42 +0200
Subject: [Python-Dev] stupid floating point question...
References: <LNBBLJKPBEHFEDALKOLCIEIEHIAA.tim_one@email.msn.com> <005b01c028ce$4234bb60$766940d5@hagrid>
Message-ID: <39D26FEA.E17675AA@lemburg.com>

Fredrik Lundh wrote:
> 
> > Try again?  I have no idea what you're asking.  Obviously, str(i) won't
> > look anything like str(1./6) for any integer i, so *that's* not what you're
> > asking.
> 
> consider this code:
> 
>         PyObject* myfunc1(void) {
>             return PyFloat_FromDouble((double) A / B);
>         }
> 
> (where A and B are constants (#defines, or spelled out))
> 
> and this code:
> 
>         PyObject* myfunc2(int a, int b) {
>             return PyFloat_FromDouble((double) a / b);
>         }
> 
> if I call the latter with a=A and b=B, and pass the resulting
> Python float through "str", will I get the same result on all
> ANSI-compatible platforms?
> 
> (in the first case, the compiler will most likely do the casting
> and the division for me, while in the latter case, it's done at
> runtime)

Casts have a higher precedence than e.g. /, so (double)a/b will
be compiled as ((double)a)/b.

If you'd rather play safe, just add the extra parenthesis.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From m.favas@per.dem.csiro.au  Wed Sep 27 23:08:01 2000
From: m.favas@per.dem.csiro.au (Mark Favas)
Date: Thu, 28 Sep 2000 06:08:01 +0800
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
Message-ID: <39D26FC1.B8214C80@per.dem.csiro.au>

Jeremy writes...
We'll have to be a lot more careful about the 2.0 final release.  PEP
200 has a tenative ship date of Oct. 10.  We should probably have a
code freeze on Oct. 6 and leave the weekend and Monday for verifying
that there are no build problems on little- and big-endian platforms.

... and 64-bit platforms (or those where sizeof(long) != sizeof(int) !=
4) <grin> - a change yesterday to md5.h caused a compilation failure.
Logged as 
http://sourceforge.net/bugs/?func=detailbug&bug_id=115506&group_id=5470

-- 
Mark Favas  -   m.favas@per.dem.csiro.au
CSIRO, Private Bag No 5, Wembley, Western Australia 6913, AUSTRALIA


From tim_one@email.msn.com  Wed Sep 27 23:40:10 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Wed, 27 Sep 2000 18:40:10 -0400
Subject: [Python-Dev] Python 2.0b2 note for Windows developers
Message-ID: <LNBBLJKPBEHFEDALKOLCCEILHIAA.tim_one@email.msn.com>

Since most Python users on Windows don't have any use for them, I trimmed
the Python 2.0b2 installer by leaving out the debug-build .lib, .pyd, .exe
and .dll files.  If you want them, they're available in a separate zip
archive; read the Windows Users notes at

http://www.pythonlabs.com/products/python2.0/download_python2.0b2.html

for info and a download link.  If you don't already know *why* you might
want them, trust me:  you don't want them <wink>.

they-don't-even-make-good-paperweights-ly y'rs  - tim




From jeremy@beopen.com  Thu Sep 28 03:55:57 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Wed, 27 Sep 2000 22:55:57 -0400
Subject: [Python-Dev] RE: buffer overflow in PC/getpathp.c
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBAEFFDLAA.MarkH@ActiveState.com>
Message-ID: <AJEAKILOCCJMDILAPGJNOEOICBAA.jeremy@beopen.com>

>I would be happy to!  Although I am happy to report that I believe it
>safe - I have been very careful of this from the time I wrote it.
>
>What is the process?  How formal should it be?

Not sure how formal it should be, but I would recommend you review uses of
strcpy and convince yourself that the source string is never longer than the
target buffer.  I am not convinced.  For example, in calculate_path(), char
*pythonhome is initialized from an environment variable and thus has unknown
length.  Later it is used in a strcpy(prefix, pythonhome), where prefix has a
fixed length.  This looks like a vulnerability that could be closed by using
strncpy(prefix, pythonhome, MAXPATHLEN).

The Unix version of this code had three or four vulnerabilities of this
sort.  So I imagine the Windows version has those too.  I was imagining that
the registry offered a whole new opportunity to provide unexpectedly long
strings that could overflow buffers.

Jeremy




From MarkH@ActiveState.com  Thu Sep 28 03:53:08 2000
From: MarkH@ActiveState.com (Mark Hammond)
Date: Thu, 28 Sep 2000 13:53:08 +1100
Subject: [Python-Dev] RE: buffer overflow in PC/getpathp.c
In-Reply-To: <AJEAKILOCCJMDILAPGJNOEOICBAA.jeremy@beopen.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBEEGADLAA.MarkH@ActiveState.com>

> target buffer.  I am not convinced.  For example, in
> calculate_path(), char
> *pythonhome is initialized from an environment variable and thus

Oh - ok - sorry.  I was speaking from memory.  I believe you
will find the registry functions safe - but likely not the older
environment-based stuff, I agree.

I will be happy to look into this.

Mark.



From fdrake@beopen.com  Thu Sep 28 03:57:46 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Wed, 27 Sep 2000 22:57:46 -0400 (EDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <39D26FC1.B8214C80@per.dem.csiro.au>
References: <39D26FC1.B8214C80@per.dem.csiro.au>
Message-ID: <14802.45994.485874.454963@cj42289-a.reston1.va.home.com>

Jeremy Hylton writes:
 > We'll have to be a lot more careful about the 2.0 final release.  PEP
 > 200 has a tenative ship date of Oct. 10.  We should probably have a
 > code freeze on Oct. 6 and leave the weekend and Monday for verifying
 > that there are no build problems on little- and big-endian platforms.

  And hopefully we'll have a SPARC machine available before then, but
the timeframe is uncertain.

Mark Favas writes:
 > ... and 64-bit platforms (or those where sizeof(long) != sizeof(int) !=
 > 4) <grin> - a change yesterday to md5.h caused a compilation failure.

  I just checked in a patch based on Tim's comment on this; please
test this on your machine if you can.  Thanks!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From dkwolfe@pacbell.net  Thu Sep 28 16:08:52 2000
From: dkwolfe@pacbell.net (Dan Wolfe)
Date: Thu, 28 Sep 2000 08:08:52 -0700
Subject: [Python-Dev] FW: regarding the Python Developer posting...
Message-ID: <0G1L00JDRRD23W@mta6.snfc21.pbi.net>

>> [Seg faults in test_sre.py while testing limits]
>> 
>you could try adding a Mac OS clause to the recursion limit stuff
>in Modules/_sre.c:
>
>#if !defined(USE_STACKCHECK)
>#if defined(...whatever's needed to detect Mac OS X...)
>#define USE_RECURSION_LIMIT 5000
>#elif defined(MS_WIN64) || defined(__LP64__) || defined(_LP64)
>/* require smaller recursion limit for a number of 64-bit platforms:
>   Win64 (MS_WIN64), Linux64 (__LP64__), Monterey (64-bit AIX) (_LP64) */
>/* FIXME: maybe the limit should be 40000 / sizeof(void*) ? */
>#define USE_RECURSION_LIMIT 7500
>#else
>#define USE_RECURSION_LIMIT 10000
>#endif
>#endif
>
>replace "...whatever...", and try larger values than 5000 (or smaller,
>if necessary.  10000 is clearly too large for your platform).
>
>(alternatively, you can increase the stack size.  maybe it's very small
>by default?)

Hi /F,

I spotted the USE_STACKCHECK, got curious, and went hunting for it... of
course curiosity kills the cat... it's time to go to work now...
meaning that the large number of replies, counter-replies, code and
follow-ups that I'm going to need to wade through is going to have to wait.

Why, you ask?  Well, when you strip Mac OS X down to the core... it's Unix
based and therefore has the getrusage call... which means that I need
to take a look at some of the patches -
<http://sourceforge.net/patch/download.php?id=101352>

In the Public Beta the stack size is currently set to 512K by default... 
which is usually enough for most processes... but not sre...

I-should-have-stayed-up-all-night'ly yours,

- Dan


From loewis@informatik.hu-berlin.de  Thu Sep 28 16:37:10 2000
From: loewis@informatik.hu-berlin.de (Martin von Loewis)
Date: Thu, 28 Sep 2000 17:37:10 +0200 (MET DST)
Subject: [Python-Dev] stupid floating point question...
Message-ID: <200009281537.RAA21436@pandora.informatik.hu-berlin.de>

> A *good* compiler won't collapse *any* fp expressions at
> compile-time, because doing so can change the 754 semantics at
> runtime (for example, the evaluation of 1./6 triggers the 754
> "inexact" signal, and the compiler has no way to know whether the
> user is expecting that to happen at runtime, so a good compiler will
> leave it alone

Of course, that doesn't say anything about what *most* compilers do.
For example, gcc, on i586-pc-linux-gnu, compiles

double foo(){
	return (double)1/6;
}

into

.LC0:
	.long 0x55555555,0x3fc55555
.text
	.align 4
.globl foo
	.type	 foo,@function
foo:
	fldl .LC0
	ret

when compiling with -fomit-frame-pointer -O2. That still doesn't say
anything about what most compilers do - if there is interest, we could
perform a comparative study on the subject :-)

The "would break 754" argument is pretty weak, IMO - gcc, for example,
doesn't claim to comply to that standard.

Regards,
Martin



From jeremy@beopen.com  Thu Sep 28 17:58:48 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Thu, 28 Sep 2000 12:58:48 -0400 (EDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14802.21851.446506.215291@beluga.mojam.com>
References: <LNBBLJKPBEHFEDALKOLCEEHIHIAA.tim_one@email.msn.com>
 <39D24CA5.7F914B7E@lemburg.com>
 <14802.20920.420649.929910@bitdiddle.concentric.net>
 <14802.21851.446506.215291@beluga.mojam.com>
Message-ID: <14803.30920.93791.816163@bitdiddle.concentric.net>

>>>>> "SM" == Skip Montanaro <skip@mojam.com> writes:

  Jeremy> We'll have to be a lot more careful about the 2.0 final
  Jeremy> release.  PEP 200 has a tentative ship date of Oct. 10.  We
  Jeremy> should probably have a code freeze on Oct. 6 and leave the
  Jeremy> weekend and Monday for verifying that there are no build
  Jeremy> problems on little- and big-endian platforms.

  SM> Since you can't test on all platforms, if you fix
  SM> platform-specific bugs between now and final release, I suggest
  SM> you make bundles (tar, Windows installer, whatever) available
  SM> (without need for CVS) and specifically ask the people who
  SM> reported those bugs to check things out using the appropriate
  SM> bundle(s).

Good idea!  I've set up a cron job that will build a tarball every
night at 3am and place it on the ftp server at python.beopen.com:
    ftp://python.beopen.com/pub/python/snapshots/

I've started things off with a tar ball I built just now.
    Python-2.0b2-devel-2000-09-28.tar.gz

Tommy -- Could you use this snapshot to verify that the unicode test
is fixed?

Jeremy



From thomas.heller@ion-tof.com  Thu Sep 28 18:05:02 2000
From: thomas.heller@ion-tof.com (Thomas Heller)
Date: Thu, 28 Sep 2000 19:05:02 +0200
Subject: [Python-Dev] Re: [Distutils] Distutils 1.0 code freeze: Oct 1
References: <20000926205312.A1470@beelzebub>
Message-ID: <02af01c0296e$40cf1b30$4500a8c0@thomasnb>

> If you know about any outstanding Distutils bugs, please tell me *now*.
> Put 'em in the SourceForge bug database if you're wondering why I
> haven't fixed them yet -- they might have gotten lost, I might not know
> about 'em, etc.  If you're not sure, put it in SourceForge.

Mike Fletcher found another bug: building extensions on Windows
(at least with MSVC) in debug mode links with the wrong Python
import library.  This leads to crashes because the extension
loads the wrong Python DLL at runtime.

Will report this on sourceforge, although I doubt Greg will be able
to fix this...

Distutils code freeze: Greg, I have some time next week to work on
this. Do you give me permission to check it in if I find a solution?

Thomas



From martin@loewis.home.cs.tu-berlin.de  Thu Sep 28 20:32:00 2000
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Thu, 28 Sep 2000 21:32:00 +0200
Subject: [Python-Dev] Dynamically loaded extension modules on MacOS X
Message-ID: <200009281932.VAA01999@loewis.home.cs.tu-berlin.de>

Has anybody succeeded in building extension modules for 2.0b1 on MacOS
X? On xml-sig, we had a report that the pyexpat module would not build
dynamically when building was initiated by the distutils, see the
report in

http://sourceforge.net/bugs/?func=detailbug&bug_id=115544&group_id=6473

Essentially, Python was configured with "-with-threads -with-dyld
-with-suffix=.exe", which causes extension modules to be linked as

cc -bundle -prebind {object files} -o {target}.so

With this linker line, the linker reported

/usr/bin/ld: warning -prebind has no effect with -bundle

and then

/usr/bin/ld: Undefined symbols:
_PyArg_ParseTuple
_PyArg_ParseTupleAndKeywords
...*removed a few dozen more symbols*...

So apparently the command line options are bogus for the compiler,
which identifies itself as

    Reading specs from /usr/libexec/ppc/2.95.2/specs
    Apple Computer, Inc. version cc-796.3, based on gcc driver version
     2.7.2.1 executing gcc version 2.95.2

Also, these options apparently won't cause creation of a shared
library. I wonder whether a simple "cc -shared" won't do the trick -
can a Mac expert enlighten me?

Regards,
Martin


From tommy@ilm.com  Thu Sep 28 20:38:54 2000
From: tommy@ilm.com (Victor the Cleaner)
Date: Thu, 28 Sep 2000 12:38:54 -0700 (PDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14803.30920.93791.816163@bitdiddle.concentric.net>
References: <LNBBLJKPBEHFEDALKOLCEEHIHIAA.tim_one@email.msn.com>
 <39D24CA5.7F914B7E@lemburg.com>
 <14802.20920.420649.929910@bitdiddle.concentric.net>
 <14802.21851.446506.215291@beluga.mojam.com>
 <14803.30920.93791.816163@bitdiddle.concentric.net>
Message-ID: <14803.40496.957808.858138@mace.lucasdigital.com>

Jeremy Hylton writes:
| 
| I've started things off with a tar ball I built just now.
|     Python-2.0b2-devel-2000-09-28.tar.gz
| 
| Tommy -- Could you use this snapshot to verify that the unicode test
| is fixed?


Sure thing.  I just tested it and it passed test_unicodedata.  Looks
good on this end...


From tim_one@email.msn.com  Thu Sep 28 20:59:55 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 28 Sep 2000 15:59:55 -0400
Subject: [Python-Dev] RE: stupid floating point question...
In-Reply-To: <200009281537.RAA21436@pandora.informatik.hu-berlin.de>
Message-ID: <LNBBLJKPBEHFEDALKOLCGELFHIAA.tim_one@email.msn.com>

[Tim]
> A *good* compiler won't collapse *any* fp expressions at
> compile-time ...

[Martin von Loewis]
> Of course, that doesn't say anything about what *most* compilers do.

Doesn't matter in this case; I told /F not to worry about it having taken
that all into account.  Almost all C compilers do a piss-poor job of taking
floating-point seriously, but it doesn't really matter for the purpose /F
has in mind.

[an example of gcc precomputing the best possible result]
> 	return (double)1/6;
> ...
> 	.long 0x55555555,0x3fc55555

No problem.  If you set the HW rounding mode to +infinity during
compilation, the first chunk there would end with a 6 instead.  Would affect
the tail end of the repr(), but not the str().

> ...
> when compiling with -fomit-frame-pointer -O2. That still doesn't say
> anything about what most compilers do - if there is interest, we could
> perform a comparative study on the subject :-)

No need.

> The "would break 754" argument is pretty weak, IMO - gcc, for example,
> doesn't claim to comply to that standard.

/F's question was about fp.  754 is the only hope he has for any x-platform
consistency (C89 alone gives no hope at all, and no basis for answering his
question).  To the extent that a C compiler ignores 754, it makes x-platform
fp consistency impossible (which, btw, Python inherits from C:  we can't
even manage to get string<->float working consistently across 100%
754-conforming platforms!).  Whether that's a weak argument or not depends
entirely on how important x-platform consistency is to a given app.  In /F's
specific case, a sloppy compiler is "good enough".

i'm-the-only-compiler-writer-i-ever-met-who-understood-fp<0.5-wink>-ly
    y'rs  - tim
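A small Python sketch of Tim's point, using the struct module purely for illustration: two doubles one unit-in-the-last-place apart (which is the most a different rounding mode during compile-time folding could produce here) differ only in the tail of their full repr(); rounded to 12 significant digits, which is what Python 2's str() printed, they agree.

```python
# Sketch: build the next representable double above 1/6 by bumping
# the last bit of its IEEE-754 encoding, then compare printed forms.
import struct

x = 1.0 / 6.0
bits = struct.unpack('<q', struct.pack('<d', x))[0]
y = struct.unpack('<d', struct.pack('<q', bits + 1))[0]

assert x != y
assert repr(x) != repr(y)          # full-precision forms differ in the tail
assert '%.12g' % x == '%.12g' % y  # 12-significant-digit forms agree
```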




From Fredrik Lundh" <effbot@telia.com  Thu Sep 28 21:40:34 2000
From: Fredrik Lundh" <effbot@telia.com (Fredrik Lundh)
Date: Thu, 28 Sep 2000 22:40:34 +0200
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point question...)
References: <LNBBLJKPBEHFEDALKOLCGELFHIAA.tim_one@email.msn.com>
Message-ID: <004f01c0298c$62ba2320$766940d5@hagrid>

tim wrote:
> > Of course, that doesn't say anything about what *most* compilers do.
> 
> Doesn't matter in this case; I told /F not to worry about it having taken
> that all into account.  Almost all C compilers do a piss-poor job of taking
> floating-point seriously, but it doesn't really matter for the purpose /F
> has in mind.

to make it clear for everyone: I'm planning to get rid of the last
remaining switch statement in unicodectype.c ("numerical value"),
and replace the doubles in there with rationals.

the problem here is that MAL's new test suite uses "str" on the
return value from that function, and it would be a bit annoying if we
ended up with a Unicode test that might fail on platforms with
lousy floating point support...

:::

on the other hand, I'm not sure I think it's a really good idea to
have "numeric" return a floating point value.  consider this:

>>> import unicodedata
>>> unicodedata.numeric(u"\N{VULGAR FRACTION ONE THIRD}")
0.33333333333333331

(the glyph looks like "1/3", and that's also what the numeric
property field in the Unicode database says)

:::

if I had access to the time machine, I'd change it to:

>>> unicodedata.numeric(u"\N{VULGAR FRACTION ONE THIRD}")
(1, 3)

...but maybe we can add an alternate API that returns the
*exact* fraction (as a numerator/denominator tuple)?

>>> unicodedata.numeric2(u"\N{VULGAR FRACTION ONE THIRD}")
(1, 3)

(hopefully, someone will come up with a better name)

</F>
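A hedged sketch of what such an exact-fraction API could look like.  The name rational() and the use of the fractions module are assumptions for illustration only (fractions did not exist in 2000); this is not the actual unicodedata API.

```python
# Hypothetical rational() built on top of unicodedata.numeric();
# the function name and implementation are illustrative assumptions.
import unicodedata
from fractions import Fraction

def rational(ch):
    """Return ch's numeric value as a (numerator, denominator) tuple."""
    value = unicodedata.numeric(ch)  # raises ValueError for non-numeric chars
    frac = Fraction(value).limit_denominator(1000)
    return (frac.numerator, frac.denominator)

assert rational(u"\N{VULGAR FRACTION ONE THIRD}") == (1, 3)
```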



From ping@lfw.org  Thu Sep 28 21:35:24 2000
From: ping@lfw.org (The Ping of Death)
Date: Thu, 28 Sep 2000 15:35:24 -0500 (CDT)
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point
 question...)
In-Reply-To: <004f01c0298c$62ba2320$766940d5@hagrid>
Message-ID: <Pine.LNX.4.10.10009281534010.5685-100000@server1.lfw.org>

On Thu, 28 Sep 2000, Fredrik Lundh wrote:
> if I had access to the time machine, I'd change it to:
> 
> >>> unicodedata.numeric(u"\N{VULGAR FRACTION ONE THIRD}")
> (1, 3)
> 
> ...but maybe we can add an alternate API that returns the
> *exact* fraction (as a numerator/denominator tuple)?
> 
> >>> unicodedata.numeric2(u"\N{VULGAR FRACTION ONE THIRD}")
> (1, 3)
> 
> (hopefully, someone will come up with a better name)

unicodedata.rational might be an obvious choice.

    >>> unicodedata.rational(u"\N{VULGAR FRACTION ONE THIRD}")
    (1, 3)


-- ?!ng



From tim_one@email.msn.com  Thu Sep 28 21:52:28 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 28 Sep 2000 16:52:28 -0400
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point question...)
In-Reply-To: <Pine.LNX.4.10.10009281534010.5685-100000@server1.lfw.org>
Message-ID: <LNBBLJKPBEHFEDALKOLCEELJHIAA.tim_one@email.msn.com>

[/F]
> ...but maybe we can add an alternate API that returns the
> *exact* fraction (as a numerator/denominator tuple)?
>
> >>> unicodedata.numeric2(u"\N{VULGAR FRACTION ONE THIRD}")
> (1, 3)
>
> (hopefully, someone will come up with a better name)

[The Ping of Death]

LOL!  Great name, Ping.

> unicodedata.rational might be an obvious choice.
>
>     >>> unicodedata.rational(u"\N{VULGAR FRACTION ONE THIRD}")
>     (1, 3)

Perfect -- another great name.  Beats all heck out of unicodedata.vulgar()
too.

leaving-it-up-to-/f-to-decide-what-.rational()-should-return-for-pi-
    ly y'ts  - the timmy of death




From thomas@xs4all.net  Thu Sep 28 21:53:30 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Thu, 28 Sep 2000 22:53:30 +0200
Subject: [Python-Dev] 2.0b2 on Slackware 7.0
In-Reply-To: <14801.2005.843456.598712@cj42289-a.reston1.va.home.com>; from fdrake@beopen.com on Tue, Sep 26, 2000 at 04:32:21PM -0400
References: <14801.2005.843456.598712@cj42289-a.reston1.va.home.com>
Message-ID: <20000928225330.A26568@xs4all.nl>

On Tue, Sep 26, 2000 at 04:32:21PM -0400, Fred L. Drake, Jr. wrote:

>   I just built and tested 2.0b2 on Slackware 7.0, and found that
> threads failed miserably.  I got the message:

> pthread_cond_wait: Interrupted system call

>   If anyone has any ideas, please send them along!  I'll turn this
> into a real bug report later.

I'm inclined to nudge this towards a libc bug... The exact version of glibc
Slackware 7 uses would be important, in that case. Redhat has been using
glibc 2.1.3 for a while, which seems stable, but I have no clue what
Slackware is using nowadays (I believe they were one of the last
of the major distributions to move to glibc, but I might be mistaken.) And
then there is the possibility of optimization bugs in the gcc that compiled
Python or the gcc that compiled the libc/libpthreads. 

(That last bit is easy to test though: copy the python binary from a working
Linux machine with the same kernel major version & libc major version. If it
works, it's an optimization bug. If it exhibits the same bug, it's
probably libc/libpthreads causing it somehow. If it fails to start
altogether, Slackware is using strange libs, and they might be the cause of
the bug, or might be just the *exposer* of the bug.)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From Fredrik Lundh" <effbot@telia.com  Thu Sep 28 22:14:45 2000
From: Fredrik Lundh" <effbot@telia.com (Fredrik Lundh)
Date: Thu, 28 Sep 2000 23:14:45 +0200
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point question...)
References: <LNBBLJKPBEHFEDALKOLCEELJHIAA.tim_one@email.msn.com>
Message-ID: <00cb01c02991$23f61360$766940d5@hagrid>

tim wrote:
> leaving-it-up-to-/f-to-decide-what-.rational()-should-return-for-pi-
>     ly y'ts  - the timmy of death

oh, the unicode folks have figured that one out:

>>> unicodedata.numeric(u"\N{GREEK PI SYMBOL}")
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
ValueError: not a numeric character

</F>



From effbot@telia.com  Thu Sep 28 22:49:13 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Thu, 28 Sep 2000 23:49:13 +0200
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point question...)
References: <LNBBLJKPBEHFEDALKOLCEELJHIAA.tim_one@email.msn.com>
Message-ID: <002a01c02996$9b1742c0$766940d5@hagrid>

tim wrote:
> > unicodedata.rational might be an obvious choice.
> >
> >     >>> unicodedata.rational(u"\N{VULGAR FRACTION ONE THIRD}")
> >     (1, 3)
> 
> Perfect -- another great name.  Beats all heck out of unicodedata.vulgar()
> too.

should I interpret this as a +1, or should I write a PEP on
this topic? ;-)

</F>



From tim_one@email.msn.com  Thu Sep 28 23:12:23 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 28 Sep 2000 18:12:23 -0400
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point question...)
In-Reply-To: <00cb01c02991$23f61360$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCCELNHIAA.tim_one@email.msn.com>

[tim]
> leaving-it-up-to-/f-to-decide-what-.rational()-should-return-for-pi-
>     ly y'ts  - the timmy of death

[/F]
> oh, the unicode folks have figured that one out:
>
> >>> unicodedata.numeric(u"\N{GREEK PI SYMBOL}")
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> ValueError: not a numeric character

Ya, except I'm starting to suspect they're not floating-point experts
either:

>>> unicodedata.numeric(u"\N{PLANCK CONSTANT OVER TWO PI}")
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
ValueError: not a numeric character
>>> unicodedata.numeric(u"\N{EULER CONSTANT}")
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
ValueError: not a numeric character
>>> unicodedata.numeric(u"\N{AIRSPEED OF AFRICAN SWALLOW}")
UnicodeError: Unicode-Escape decoding error: Invalid Unicode Character Name
>>>




From mal@lemburg.com  Thu Sep 28 23:30:03 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 29 Sep 2000 00:30:03 +0200
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating
 pointquestion...)
References: <Pine.LNX.4.10.10009281534010.5685-100000@server1.lfw.org>
Message-ID: <39D3C66B.3A3350AE@lemburg.com>

Fredrik Lundh wrote:
> 
> tim wrote:
> > > unicodedata.rational might be an obvious choice.
> > >
> > >     >>> unicodedata.rational(u"\N{VULGAR FRACTION ONE THIRD}")
> > >     (1, 3)
> >
> > Perfect -- another great name.  Beats all heck out of unicodedata.vulgar()
> > too.
> 
> should I interpret this as a +1, or should I write a PEP on
> this topic? ;-)

+1 from here. 

I really only chose floats to get all possibilities (digit, decimal
and fractions) into one type... Python should support rational numbers
some day.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From tim_one@email.msn.com  Thu Sep 28 23:32:50 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 28 Sep 2000 18:32:50 -0400
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point question...)
In-Reply-To: <002a01c02996$9b1742c0$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCOELNHIAA.tim_one@email.msn.com>

[The Ping of Death suggests unicodedata.rational]
>     >>> unicodedata.rational(u"\N{VULGAR FRACTION ONE THIRD}")
>     (1, 3)

[Timmy replies]
> Perfect -- another great name.  Beats all heck out of
> unicodedata.vulgar() too.

[/F inquires]
> should I interpret this as a +1, or should I write a PEP on
> this topic? ;-)

I'm on vacation (but too ill to do much besides alternate sleep & email
<snarl>), and I'm not sure we have clear rules about how votes from
commercial Python developers count when made on their own time.  Perhaps a
meta-PEP first to resolve that issue?

Oh, all right, just speaking for myself, I'm +1 on The Ping of Death's name
suggestion provided this function is needed at all.  But not being a Unicode
Guy by nature, I have no opinion on whether the function *is* needed (I
understand how digits work in American English, and ord(ch)-ord('0') is the
limit of my experience; can't say whether even the current .numeric() is
useful for Klingons or Lawyers or whoever it is who expects to get a numeric
value out of a character for 1/2 or 1/3).




From mal@lemburg.com  Thu Sep 28 23:33:50 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 29 Sep 2000 00:33:50 +0200
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point
 question...)
References: <LNBBLJKPBEHFEDALKOLCCELNHIAA.tim_one@email.msn.com>
Message-ID: <39D3C74E.B1952909@lemburg.com>

Tim Peters wrote:
> 
> [tim]
> > leaving-it-up-to-/f-to-decide-what-.rational()-should-return-for-pi-
> >     ly y'ts  - the timmy of death
> 
> [/F]
> > oh, the unicode folks have figured that one out:
> >
> > >>> unicodedata.numeric(u"\N{GREEK PI SYMBOL}")
> > Traceback (most recent call last):
> >   File "<stdin>", line 1, in ?
> > ValueError: not a numeric character
> 
> Ya, except I'm starting to suspect they're not floating-point experts
> either:
> 
> >>> unicodedata.numeric(u"\N{PLANCK CONSTANT OVER TWO PI}")
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> ValueError: not a numeric character
> >>> unicodedata.numeric(u"\N{EULER CONSTANT}")
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> ValueError: not a numeric character
> >>> unicodedata.numeric(u"\N{AIRSPEED OF AFRICAN SWALLOW}")
> UnicodeError: Unicode-Escape decoding error: Invalid Unicode Character Name
> >>>

Perhaps you should submit these for Unicode 4.0 ;-)

But really, I don't suspect that anyone is going to do serious
character to number conversion on these esoteric characters. Plain
old digits will do just as they always have (or does anyone know
of ways to represent irrational numbers on PCs by other means than
an algorithm which spits out new digits every now and then ?).

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From mal@lemburg.com  Thu Sep 28 23:38:47 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 29 Sep 2000 00:38:47 +0200
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point
 question...)
References: <LNBBLJKPBEHFEDALKOLCOELNHIAA.tim_one@email.msn.com>
Message-ID: <39D3C877.BDBC52DF@lemburg.com>

Tim Peters wrote:
> 
> [The Ping of Death suggests unicodedata.rational]
> >     >>> unicodedata.rational(u"\N{VULGAR FRACTION ONE THIRD}")
> >     (1, 3)
> 
> [Timmy replies]
> > Perfect -- another great name.  Beats all heck out of
> > unicodedata.vulgar() too.
> 
> [/F inquires]
> > should I interpret this as a +1, or should I write a PEP on
> > this topic? ;-)
> 
> I'm on vacation (but too ill to do much besides alternate sleep & email
> <snarl>), and I'm not sure we have clear rules about how votes from
> commercial Python developers count when made on their own time.  Perhaps a
> meta-PEP first to resolve that issue?
> 
> Oh, all right, just speaking for myself, I'm +1 on The Ping of Death's name
> suggestion provided this function is needed at all.  But not being a Unicode
> Guy by nature, I have no opinion on whether the function *is* needed (I
> understand how digits work in American English, and ord(ch)-ord('0') is the
> limit of my experience; can't say whether even the current .numeric() is
> useful for Klingons or Lawyers or whoever it is who expects to get a numeric
> value out of a character for 1/2 or 1/3).

The reason for "numeric" being available at all is that the
UnicodeData.txt file format specifies such a field. I don't believe
anyone will make serious use of it though... e.g. 2² would parse as 22
and not evaluate to 4.
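A quick check with unicodedata illustrates the point (modern-Python sketch; the editor added this, it is not from the original thread):

```python
import unicodedata

# SUPERSCRIPT TWO carries the numeric value 2, so a parser that simply
# concatenates per-character numeric values would read "2\u00b2" as the
# digits 2 and 2 -- i.e. 22 -- rather than evaluating it as 2**2 == 4.
assert unicodedata.numeric(u"\N{SUPERSCRIPT TWO}") == 2.0
assert unicodedata.numeric(u"2") == 2.0
```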

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From tim_one@email.msn.com  Thu Sep 28 23:48:08 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 28 Sep 2000 18:48:08 -0400
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point  question...)
In-Reply-To: <39D3C74E.B1952909@lemburg.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGELPHIAA.tim_one@email.msn.com>

[Tim]
> >>> unicodedata.numeric(u"\N{PLANCK CONSTANT OVER TWO PI}")
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> ValueError: not a numeric character
> >>> unicodedata.numeric(u"\N{EULER CONSTANT}")
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> ValueError: not a numeric character
> >>> unicodedata.numeric(u"\N{AIRSPEED OF AFRICAN SWALLOW}")
> UnicodeError: Unicode-Escape decoding error: Invalid Unicode
                Character Name

[MAL]
> Perhaps you should submit these for Unicode 4.0 ;-)

Note that the first two are already there; they just don't have an
associated numerical value.  The last one was a hint that I was trying to
write a frivolous msg while giving my "<wink>" key a break <wink>.

> But really, I don't suspect that anyone is going to do serious
> character to number conversion on these esoteric characters. Plain
> old digits will do just as they always have ...

Which is why I have to wonder whether there's *any* value in exposing the
numeric-value property beyond regular old digits.




From MarkH@ActiveState.com  Fri Sep 29 02:36:11 2000
From: MarkH@ActiveState.com (Mark Hammond)
Date: Fri, 29 Sep 2000 12:36:11 +1100
Subject: [Python-Dev] Patch to avoid conflict with older versions of Python.
Message-ID: <ECEPKNMJLHAPFFJHDOJBEEIHDLAA.MarkH@ActiveState.com>

Hi all,
	I'd like some feedback on a patch assigned to me.  It is designed to
prevent Python extensions built for an earlier version of Python from
crashing the new version.

I haven't actually tested the patch, but I am sure it works as advertised
(who is db31 anyway?).

My question relates more to the "style" - the patch locates the new .pyd's
address in memory, and parses through the MS PE/COFF format, locating the
import table.  It then scans the import table looking for Pythonxx.dll, and
compares any found entries with the current version.

Quite clever - a definite plus is that it should work for all old and
future versions (of Python - dunno about Windows ;-) - but do we want this
sort of code in Python?  Is this sort of hack, however clever, going to
come back and bite us?

Second related question:  if people like it, is this feature something we
can squeeze in for 2.0?

If there are no objections to any of this, I am happy to test it and check
it in - but am not confident of doing so without some feedback.

Thanks,

Mark.



From MarkH@ActiveState.com  Fri Sep 29 02:42:01 2000
From: MarkH@ActiveState.com (Mark Hammond)
Date: Fri, 29 Sep 2000 12:42:01 +1100
Subject: [Python-Dev] Patch to avoid conflict with older versions of Python.
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBEEIHDLAA.MarkH@ActiveState.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBAEIIDLAA.MarkH@ActiveState.com>

> Hi all,
> 	I'd like some feedback on a patch assigned to me.

sorry -
http://sourceforge.net/patch/?func=detailpatch&patch_id=101676&group_id=5470

Mark.



From tim_one@email.msn.com  Fri Sep 29 03:24:24 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 28 Sep 2000 22:24:24 -0400
Subject: [Python-Dev] .pyc broken on Windows -- anywhere else?
Message-ID: <LNBBLJKPBEHFEDALKOLCEEMHHIAA.tim_one@email.msn.com>

This is from 2.0b2 Windows, and typical:

C:\Python20>python -v
# C:\PYTHON20\lib\site.pyc has bad magic
import site # from C:\PYTHON20\lib\site.py
# wrote C:\PYTHON20\lib\site.pyc
# C:\PYTHON20\lib\os.pyc has bad magic
import os # from C:\PYTHON20\lib\os.py
# wrote C:\PYTHON20\lib\os.pyc
import nt # builtin
# C:\PYTHON20\lib\ntpath.pyc has bad magic
import ntpath # from C:\PYTHON20\lib\ntpath.py
# wrote C:\PYTHON20\lib\ntpath.pyc
# C:\PYTHON20\lib\stat.pyc has bad magic
import stat # from C:\PYTHON20\lib\stat.py
# wrote C:\PYTHON20\lib\stat.pyc
# C:\PYTHON20\lib\string.pyc has bad magic
import string # from C:\PYTHON20\lib\string.py
# wrote C:\PYTHON20\lib\string.pyc
import strop # builtin
# C:\PYTHON20\lib\UserDict.pyc has bad magic
import UserDict # from C:\PYTHON20\lib\UserDict.py
# wrote C:\PYTHON20\lib\UserDict.pyc
Python 2.0b2 (#6, Sep 26 2000, 14:59:21) [MSC 32 bit (Intel)] on win32
Type "copyright", "credits" or "license" for more information.
>>>

That is, .pyc's don't work at all anymore on Windows:  Python *always*
thinks they have a bad magic number.  Elsewhere?

Also noticed that test_popen2 got broken on Windows after 2.0b2, for a very
weird reason:

C:\Code\python\dist\src\PCbuild>python ../lib/test/test_popen2.py
Test popen2 module:
testing popen2...
testing popen3...
Traceback (most recent call last):
  File "../lib/test/test_popen2.py", line 64, in ?
    main()
  File "../lib/test/test_popen2.py", line 23, in main
    popen2._test()
  File "c:\code\python\dist\src\lib\popen2.py", line 188, in _test
    for inst in _active[:]:
NameError: There is no variable named '_active'

C:\Code\python\dist\src\PCbuild>

C:\Code\python\dist\src\PCbuild>python ../lib/popen2.py
testing popen2...
testing popen3...
Traceback (most recent call last):
  File "../lib/popen2.py", line 195, in ?
    _test()
  File "../lib/popen2.py", line 188, in _test
    for inst in _active[:]:
NameError: There is no variable named '_active'

C:\Code\python\dist\src\PCbuild>

Ah!  That's probably because of this clever new code:

if sys.platform[:3] == "win":
    # Some things don't make sense on non-Unix platforms.
    del Popen3, Popen4, _active, _cleanup

If I weren't on vacation, I'd check in a fix <wink>.




From fdrake@beopen.com  Fri Sep 29 03:25:00 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Thu, 28 Sep 2000 22:25:00 -0400 (EDT)
Subject: [Python-Dev] Re: WHOA!!! Screw up on my part: how do I undo this (Re: [Python-checkins] CVS: black - Imported sources)
In-Reply-To: <20000927003233.C19872@ActiveState.com>
References: <200009270706.AAA21107@slayer.i.sourceforge.net>
 <20000927003233.C19872@ActiveState.com>
Message-ID: <14803.64892.937014.475312@cj42289-a.reston1.va.home.com>

Trent Mick writes:
 > I was playing with a different SourceForge project and I screwed up my
 > CVSROOT (used Python's instead). Sorry SOrry!

  Well, you blew it.  Don't worry, we'll have you kicked off
SourceForge in no time!  ;)
  Well, maybe not.  I've submitted a support request to fix this:

http://sourceforge.net/support/?func=detailsupport&support_id=106112&group_id=1


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From m.favas@per.dem.csiro.au  Fri Sep 29 03:49:54 2000
From: m.favas@per.dem.csiro.au (Mark Favas)
Date: Fri, 29 Sep 2000 10:49:54 +0800
Subject: [Python-Dev] .pyc broken on Windows -- anywhere else?
Message-ID: <39D40352.5C511629@per.dem.csiro.au>

Tim writes:
That is, .pyc's don't work at all anymore on Windows:  Python *always*
thinks they have a bad magic number.  Elsewhere?

Just grabbed the latest from CVS - .pyc is still fine on Tru64 Unix...

Mark
-- 
Email - m.favas@per.dem.csiro.au       Postal - Mark C Favas
Phone - +61 8 9333 6268, 041 892 6074           CSIRO Exploration & Mining
Fax   - +61 8 9387 8642                         Private Bag No 5
                                                Wembley, Western Australia 6913


From nhodgson@bigpond.net.au  Fri Sep 29 04:58:41 2000
From: nhodgson@bigpond.net.au (Neil Hodgson)
Date: Fri, 29 Sep 2000 13:58:41 +1000
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point  question...)
References: <LNBBLJKPBEHFEDALKOLCGELPHIAA.tim_one@email.msn.com>
Message-ID: <045201c029c9$8f49fd10$8119fea9@neil>

[Tim]
> Which is why I have to wonder whether there's *any* value in exposing the
> numeric-value property beyond regular old digits.

   Running (in IDLE or PythonWin with a font that covers most of Unicode
like Tahoma):
import unicodedata

for c in range(0x10000):
    x = unichr(c)
    try:
        b = unicodedata.numeric(x)
        # print "numeric:", repr(x)
        try:
            a = unicodedata.digit(x)
            if a != b:
                print "bad", repr(x)
        except ValueError:
            print "Numeric but not digit", hex(c), x.encode("utf8"), "numeric ->", b
    except ValueError:
        pass

   Finds about 130 characters. The only ones I feel are worth worrying about
are the half, quarters and eighths (0xbc, 0xbd, 0xbe, 0x215b, 0x215c,
0x215d, 0x215e) which are commonly used for expressing the prices of stocks
and commodities in the US. This may be rarely used but it is better to have
it available than to have people coding up their own translation tables.
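Those seven characters do come straight out of the database; a translation table reduces to one call (modern-Python sketch added by the editor, values checked at runtime):

```python
import unicodedata

# The fraction characters mentioned above: 1/4, 1/2, 3/4 and the odd
# eighths.  All of their numeric values are exact binary fractions, so
# float comparison is safe here.
chars = u"\u00bc\u00bd\u00be\u215b\u215c\u215d\u215e"
values = [unicodedata.numeric(ch) for ch in chars]
assert values == [0.25, 0.5, 0.75, 0.125, 0.375, 0.625, 0.875]
```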

   The 0x302* 'Hangzhou' numerals look like they should be classified as
digits.

   Neil




From tim_one@email.msn.com  Fri Sep 29 04:27:55 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 28 Sep 2000 23:27:55 -0400
Subject: [Python-Dev] .pyc broken on Windows -- anywhere else?
In-Reply-To: <39D40352.5C511629@per.dem.csiro.au>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEMJHIAA.tim_one@email.msn.com>

[Tim]
> That is, .pyc's don't work at all anymore on Windows:  Python *always*
> thinks they have a bad magic number.  Elsewhere?

[Mark Favas]
> Just grabbed the latest from CVS - .pyc is still fine on Tru64 Unix...

Good clue!  Looks like Guido broke this on Windows when adding some
"exclusive write" silliness <wink> for Unixoids.  I'll try to make time
tonight to understand it (*looks* like fdopen is too late to ask for binary
mode under Windows ...).




From tim_one@email.msn.com  Fri Sep 29 04:40:49 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 28 Sep 2000 23:40:49 -0400
Subject: [Python-Dev] .pyc broken on Windows -- anywhere else?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOEMJHIAA.tim_one@email.msn.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEMLHIAA.tim_one@email.msn.com>

Any Unix geek awake?  import.c has this, starting at line 640:

#if defined(O_EXCL)&&defined(O_CREAT)&&defined(O_WRONLY)&&defined(O_TRUNC)
...
	fd = open(filename, O_EXCL|O_CREAT|O_WRONLY|O_TRUNC, 0666);

I need to add O_BINARY to this soup to fix .pyc's under Windows.  Is
O_BINARY customarily defined on Unices?  I realize Unices don't *need* it,
the question is whether it will break Unices if it's there ...




From esr@thyrsus.com  Fri Sep 29 04:59:12 2000
From: esr@thyrsus.com (Eric S. Raymond)
Date: Thu, 28 Sep 2000 23:59:12 -0400
Subject: [Python-Dev] .pyc broken on Windows -- anywhere else?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKEMLHIAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Thu, Sep 28, 2000 at 11:40:49PM -0400
References: <LNBBLJKPBEHFEDALKOLCOEMJHIAA.tim_one@email.msn.com> <LNBBLJKPBEHFEDALKOLCKEMLHIAA.tim_one@email.msn.com>
Message-ID: <20000928235912.A9339@thyrsus.com>

Tim Peters <tim_one@email.msn.com>:
> Any Unix geek awake?  import.c has this, starting at line 640:
> 
> #if defined(O_EXCL)&&defined(O_CREAT)&&defined(O_WRONLY)&&defined(O_TRUNC)
> ...
> 	fd = open(filename, O_EXCL|O_CREAT|O_WRONLY|O_TRUNC, 0666);
> 
> I need to add O_BINARY to this soup to fix .pyc's under Windows.  Is
> O_BINARY customarily defined on Unices?  I realize Unices don't *need* it,
> the question is whether it will break Unices if it's there ...

It will.  In particular, there is no such flag on Linux.  However
the workaround is trivial:

1. Make your flag argument O_EXCL|O_CREAT|O_WRONLY|O_TRUNC|O_BINARY

2. Above it somewhere, write

#ifndef O_BINARY
#define O_BINARY	0
#endif

Quite painless.
-- 
		<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>

Society in every state is a blessing, but government even in its best
state is but a necessary evil; in its worst state an intolerable one;
for when we suffer, or are exposed to the same miseries *by a
government*, which we might expect in a country *without government*,
our calamity is heightened by reflecting that we furnish the means
by which we suffer.
	-- Thomas Paine


From tim_one@email.msn.com  Fri Sep 29 04:47:55 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 28 Sep 2000 23:47:55 -0400
Subject: [Python-Dev] .pyc broken on Windows -- anywhere else?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKEMLHIAA.tim_one@email.msn.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEMMHIAA.tim_one@email.msn.com>

Nevermind.  Fixed it in a way that will be safe everywhere.

> -----Original Message-----
> From: python-dev-admin@python.org [mailto:python-dev-admin@python.org]On
> Behalf Of Tim Peters
> Sent: Thursday, September 28, 2000 11:41 PM
> To: Mark Favas; python-dev@python.org
> Subject: RE: [Python-Dev] .pyc broken on Windows -- anywhere else?
>
>
> Any Unix geek awake?  import.c has this, starting at line 640:
>
> #if defined(O_EXCL)&&defined(O_CREAT)&&defined(O_WRONLY)&&defined(O_TRUNC)
> ...
> 	fd = open(filename, O_EXCL|O_CREAT|O_WRONLY|O_TRUNC, 0666);
>
> I need to add O_BINARY to this soup to fix .pyc's under Windows.  Is
> O_BINARY customarily defined on Unices?  I realize Unices don't *need* it,
> the question is whether it will break Unices if it's there ...




From fdrake@beopen.com  Fri Sep 29 04:48:49 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Thu, 28 Sep 2000 23:48:49 -0400 (EDT)
Subject: [Python-Dev] .pyc broken on Windows -- anywhere else?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKEMLHIAA.tim_one@email.msn.com>
References: <LNBBLJKPBEHFEDALKOLCOEMJHIAA.tim_one@email.msn.com>
 <LNBBLJKPBEHFEDALKOLCKEMLHIAA.tim_one@email.msn.com>
Message-ID: <14804.4385.22560.522921@cj42289-a.reston1.va.home.com>

Tim Peters writes:
 > Any Unix geek awake?  import.c has this, starting at line 640:

  Probably quite a few!

 > #if defined(O_EXCL)&&defined(O_CREAT)&&defined(O_WRONLY)&&defined(O_TRUNC)
 > ...
 > 	fd = open(filename, O_EXCL|O_CREAT|O_WRONLY|O_TRUNC, 0666);
 > 
 > I need to add O_BINARY to this soup to fix .pyc's under Windows.  Is
 > O_BINARY customarily defined on Unices?  I realize Unices don't *need* it,
 > the question is whether it will break Unices if it's there ...

  I think it varies substantially.  I just checked on a FreeBSD
machine in /usr/include/*.h and /usr/include/*/*.h, and grep said it
wasn't there.  It is defined on my Linux box, however.
  Since O_BINARY is a no-op for Unix, you can do this:

#if defined(O_EXCL)&&defined(O_CREAT)&&defined(O_WRONLY)&&defined(O_TRUNC)
#ifndef O_BINARY
#define O_BINARY (0)
#endif
...
	fd = open(filename, O_EXCL|O_CREAT|O_WRONLY|O_TRUNC, 0666);


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From fdrake@beopen.com  Fri Sep 29 04:51:44 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Thu, 28 Sep 2000 23:51:44 -0400 (EDT)
Subject: [Python-Dev] .pyc broken on Windows -- anywhere else?
In-Reply-To: <20000928235912.A9339@thyrsus.com>
References: <LNBBLJKPBEHFEDALKOLCOEMJHIAA.tim_one@email.msn.com>
 <LNBBLJKPBEHFEDALKOLCKEMLHIAA.tim_one@email.msn.com>
 <20000928235912.A9339@thyrsus.com>
Message-ID: <14804.4560.644795.806373@cj42289-a.reston1.va.home.com>

Eric S. Raymond writes:
 > It will.  In particular, there us no such flag on Linux.  However
 > the workaround is trivial:

  Ah, looking back at my grep output, I see that it's defined by a lot
of libraries, but not the standard headers.  It *is* defined by the
Apache API headers, kpathsea, MySQL, OpenSSL, and Qt.  And that's just
from what I have installed.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From bwarsaw@beopen.com  Fri Sep 29 07:06:33 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Fri, 29 Sep 2000 02:06:33 -0400 (EDT)
Subject: [Python-Dev] Re: WHOA!!! Screw up on my part: how do I undo this (Re: [Python-checkins] CVS: black - Imported sources)
References: <200009270706.AAA21107@slayer.i.sourceforge.net>
 <20000927003233.C19872@ActiveState.com>
Message-ID: <14804.12649.504962.985774@anthem.concentric.net>

>>>>> "TM" == Trent Mick <trentm@ActiveState.com> writes:

    TM> I was playing with a different SourceForge project and I
    TM> screwed up my CVSROOT (used Python's instead). Sorry SOrry!

    TM> How do I undo this cleanly? I could 'cvs remove' the
    TM> README.txt file but that would still leave the top-level
    TM> 'black/' turd right? Do the SourceForge admin guys have to
    TM> manually kill the 'black' directory in the repository?

Once a directory's been added, it's nearly impossible to cleanly delete
it from CVS.  If it's infected people's working directories, you're
really screwed, because even if the SF admins remove it from the
repository, it'll be a pain to clean up on the client side.

Probably the best thing to do is make sure you "cvs rm" everything in the
directory and then just let "cvs up -P" remove the empty directory.
Everybody /is/ using -P (and -d) right? :)

-Barry


From effbot@telia.com  Fri Sep 29 08:01:37 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Fri, 29 Sep 2000 09:01:37 +0200
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point  question...)
References: <LNBBLJKPBEHFEDALKOLCGELPHIAA.tim_one@email.msn.com>
Message-ID: <007301c029e3$612e1960$766940d5@hagrid>

tim wrote:
> > But really, I don't suspect that anyone is going to do serious
> > character to number conversion on these esoteric characters. Plain
> > old digits will do just as they always have ...
> 
> Which is why I have to wonder whether there's *any* value in exposing the
> numeric-value property beyond regular old digits.

the unicode database has three fields dealing with the numeric
value: decimal digit value (integer), digit value (integer), and
numeric value (integer *or* rational):

    "This is a numeric field. If the character has the numeric
    property, as specified in Chapter 4 of the Unicode Standard,
    the value of that character is represented with an integer or
    rational number in this field."

here's today's proposal: let's claim that it's a bug to return a float
from "numeric", and change it to return a string instead.

(this will match "decomposition", which is also "broken" -- it really
should return a tag followed by a sequence of unicode characters).
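Going from such a string back to a usable value stays trivial for callers (an editor-added sketch with a hypothetical helper name; the fractions module postdates this thread and is used only for brevity):

```python
from fractions import Fraction


def rational_from_field(field):
    """Parse a UnicodeData.txt numeric field -- an integer such as "5"
    or a rational such as "1/3" -- into a (numerator, denominator)
    tuple.  Hypothetical helper, not a stdlib API.
    """
    frac = Fraction(field)
    return (frac.numerator, frac.denominator)
```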

</F>



From martin@loewis.home.cs.tu-berlin.de  Fri Sep 29 08:01:19 2000
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Fri, 29 Sep 2000 09:01:19 +0200
Subject: [Python-Dev] Patch to avoid conflict with older versions of Python.
Message-ID: <200009290701.JAA01119@loewis.home.cs.tu-berlin.de>

> but do we want this sort of code in Python?

Since I proposed a more primitive approach to solve the same problem
(which you had postponed), I'm obviously in favour of that patch.

> Is this sort of hack, however clever, going to come back and bite us?

I can't see why. The code is quite defensive: If the data structures
don't look like what it expects, it gives up and claims it can't find
the version of the python dll used by this module.

So in worst case, we get what we have now.

My only concern is that it assumes the HMODULE is an address which can
be dereferenced. If there was some MS documentation stating that this
is guaranteed in Win32, it'd be fine. If it is merely established fact
that all Win32 current implementations implement HMODULE that way, I'd
rather see a __try/__except around that - but that would only add to
the defensive style of this patch.

A hack is required since earlier versions of Python did not consider
this problem. I don't know whether python20.dll will behave reasonably
when loaded into Python 2.1 next year - was there anything done to
address the "uninitialized interpreter" problem?

> if people like it, is this feature something we can squeeze in for
> 2.0?

I think this patch will have most value if applied to 2.0. When 2.1
comes along, many people will have been bitten by this bug, and will
know to avoid it - so it won't do that much good in 2.1.

I'm not looking forward to answering all the help@python.org messages
to explain why Python can't deal with versions properly, so I'd rather
see these people get a nice exception instead of IDLE silently closing
all windows [including those with two hours of unsaved work].

Regards,
Martin

P.S. db3l is David Bolen; see http://sourceforge.net/users/db3l.


From tim_one@email.msn.com  Fri Sep 29 08:32:09 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Fri, 29 Sep 2000 03:32:09 -0400
Subject: [Python-Dev] Patch to avoid conflict with older versions of Python.
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBEEIHDLAA.MarkH@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCENFHIAA.tim_one@email.msn.com>

[Mark Hammond]
> 	I'd like some feedback on a patch assigned to me.

It's assigned to you only because I'm on vacation now <wink>.

> It is designed to prevent Python extensions built for an earlier
> version of Python from crashing the new version.
>
> I haven't actually tested the patch, but I am sure it works as
> advertised (who is db31 anyway?).

It's sure odd that SF doesn't know!  It's David Bolen; see

http://www.python.org/pipermail/python-list/2000-September/119081.html

> My question relates more to the "style" - the patch locates the new
> .pyd's address in memory, and parses through the MS PE/COFF format,
> locating the import table.  It then scans the import table looking
> for Pythonxx.dll, and compares any found entries with the current
> version.
>
> Quite clever - a definite plus is that it should work for all old and
> future versions (of Python - dunno about Windows ;-) - but do we want
> this sort of code in Python?  Is this sort of hack, however clever,
> going to come back and bite us?

Guido will hate it:  his general rule is that he doesn't want code he
couldn't personally repair if needed, and this code is from Pluto (I hear
that's right next to Redmond, though, so let's not overreact either <wink>).

OTOH, Python goes to extreme lengths to prevent crashes, and my reading of
early c.l.py reports is that the 2.0 DLL incompatibility is going to cause a
lot of crashes out in the field.  People generally don't know squat about
the extension modules they're using -- or sometimes even that they *are*
using some.

> Second related question:  if people like it, is this feature something we
> can squeeze in for 2.0?

Well, it's useless if we don't.  That is, we should bite the bullet and come
up with a principled solution, even if that means extension writers have to
add a few new lines of code or be shunned from the community forever.  But
that won't happen for 2.0.

> If there are no objections to any of this, I am happy to test it and
> check it in - but am not confident of doing so without some feedback.

Guido's out of touch, but I'm on vacation, so he can't yell at me for
encouraging you on my own time.  If it works, I would check it in with the
understanding that we earnestly intend to do whatever it takes to get rid of
this code after 2.0.    It is not a long-term solution, but if it works it's
a very expedient hack.  Hacks suck for us, but letting Python blow up sucks
for users.  So long as I'm on vacation, I side with the users <0.9 wink>.

then-let's-ask-david-to-figure-out-how-to-disable-norton-antivirus-ly
    y'rs  - tim




From thomas.heller@ion-tof.com  Fri Sep 29 08:36:33 2000
From: thomas.heller@ion-tof.com (Thomas Heller)
Date: Fri, 29 Sep 2000 09:36:33 +0200
Subject: [Python-Dev] Patch to avoid conflict with older versions of Python.
References: <ECEPKNMJLHAPFFJHDOJBEEIHDLAA.MarkH@ActiveState.com>
Message-ID: <007d01c029e8$00b33570$4500a8c0@thomasnb>

> Hi all,
> I'd like some feedback on a patch assigned to me.  It is designed to
> prevent Python extensions built for an earlier version of Python from
> crashing the new version.
>
> I haven't actually tested the patch, but I am sure it works as advertised
> (who is db31 anyway?).
>
> My question relates more to the "style" - the patch locates the new .pyd's
> address in memory, and parses through the MS PE/COFF format, locating the
> import table.  It then scans the import table looking for Pythonxx.dll,
> and compares any found entries with the current version.
Shouldn't the win32 api BindImageEx be used? Then you would not have
to know about the PE/COFF format at all. You can install a callback
function which will be called with the dll-names bound.
According to my docs, BindImageEx may not be included in early versions of
Win95, but who is using that anyway?
(Well, ok, what about CE?)

>
> Quite clever - a definite plus is that it should work for all old and
> future versions (of Python - dunno about Windows ;-) - but do we want this
> sort of code in Python?  Is this sort of hack, however clever, going to
> come back and bite us?
>
> Second related question:  if people like it, is this feature something we
> can squeeze in for 2.0?
+1 from me (if I count).

>
> If there are no objections to any of this, I am happy to test it and check
> it in - but am not confident of doing so without some feedback.
>
> Thanks,
>
> Mark.

Thomas



From effbot@telia.com  Fri Sep 29 08:53:57 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Fri, 29 Sep 2000 09:53:57 +0200
Subject: [Python-Dev] Patch to avoid conflict with older versions of Python.
References: <ECEPKNMJLHAPFFJHDOJBEEIHDLAA.MarkH@ActiveState.com> <007d01c029e8$00b33570$4500a8c0@thomasnb>
Message-ID: <012401c029ea$6cfbc7e0$766940d5@hagrid>

> According to my docs, BindImageEx may not be included in early versions of
> Win95, but who is using that anyway?

lots of people -- the first version of our PythonWare
installer didn't run on the original Win95 release, and
we still get complaints about that.

on the other hand, it's not that hard to use BindImageEx
only if it exists...

</F>



From mal@lemburg.com  Fri Sep 29 08:54:16 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 29 Sep 2000 09:54:16 +0200
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point
 question...)
References: <LNBBLJKPBEHFEDALKOLCGELPHIAA.tim_one@email.msn.com>
Message-ID: <39D44AA8.926DCF04@lemburg.com>

Tim Peters wrote:
> 
> [Tim]
> > >>> unicodedata.numeric(u"\N{PLANCK CONSTANT OVER TWO PI}")
> > Traceback (most recent call last):
> >   File "<stdin>", line 1, in ?
> > ValueError: not a numeric character
> > >>> unicodedata.numeric(u"\N{EULER CONSTANT}")
> > Traceback (most recent call last):
> >   File "<stdin>", line 1, in ?
> > ValueError: not a numeric character
> > >>> unicodedata.numeric(u"\N{AIRSPEED OF AFRICAN SWALLOW}")
> > UnicodeError: Unicode-Escape decoding error: Invalid Unicode
>                 Character Name
> 
> [MAL]
> > Perhaps you should submit these for Unicode 4.0 ;-)
> 
> Note that the first two are already there; they just don't have an
> associated numerical value.  The last one was a hint that I was trying to
> write a frivolous msg while giving my "<wink>" key a break <wink>.

That's what I meant: you should submit the numeric values for
the first two and opt for addition of the last.
 
> > But really, I don't suspect that anyone is going to do serious
> > character to number conversion on these esoteric characters. Plain
> > old digits will do just as they always have ...
> 
> Which is why I have to wonder whether there's *any* value in exposing the
> numeric-value property beyond regular old digits.

It is needed for Unicode 3.0 standard compliance and for whoever
wants to use this data. Since the Unicode database explicitly
contains fractions, I think adding the .rational() API would
make sense to provide a different access method to this data.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From mal@lemburg.com  Fri Sep 29 09:01:57 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 29 Sep 2000 10:01:57 +0200
Subject: [Python-Dev] .pyc broken on Windows -- anywhere else?
References: <LNBBLJKPBEHFEDALKOLCEEMHHIAA.tim_one@email.msn.com>
Message-ID: <39D44C75.110D83B6@lemburg.com>

Tim Peters wrote:
> 
> This is from 2.0b2 Windows, and typical:
> 
> C:\Python20>python -v
> # C:\PYTHON20\lib\site.pyc has bad magic
> import site # from C:\PYTHON20\lib\site.py
> # wrote C:\PYTHON20\lib\site.pyc
> # C:\PYTHON20\lib\os.pyc has bad magic
> import os # from C:\PYTHON20\lib\os.py
> # wrote C:\PYTHON20\lib\os.pyc
> import nt # builtin
> # C:\PYTHON20\lib\ntpath.pyc has bad magic
> import ntpath # from C:\PYTHON20\lib\ntpath.py
> # wrote C:\PYTHON20\lib\ntpath.pyc
> # C:\PYTHON20\lib\stat.pyc has bad magic
> import stat # from C:\PYTHON20\lib\stat.py
> # wrote C:\PYTHON20\lib\stat.pyc
> # C:\PYTHON20\lib\string.pyc has bad magic
> import string # from C:\PYTHON20\lib\string.py
> # wrote C:\PYTHON20\lib\string.pyc
> import strop # builtin
> # C:\PYTHON20\lib\UserDict.pyc has bad magic
> import UserDict # from C:\PYTHON20\lib\UserDict.py
> # wrote C:\PYTHON20\lib\UserDict.pyc
> Python 2.0b2 (#6, Sep 26 2000, 14:59:21) [MSC 32 bit (Intel)] on win32
> Type "copyright", "credits" or "license" for more information.
> >>>
> 
> That is, .pyc's don't work at all anymore on Windows:  Python *always*
> thinks they have a bad magic number.  Elsewhere?

FYI, it works just fine on Linux on i586.

--
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From mal@lemburg.com  Fri Sep 29 09:13:34 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 29 Sep 2000 10:13:34 +0200
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point
 question...)
References: <LNBBLJKPBEHFEDALKOLCGELPHIAA.tim_one@email.msn.com> <007301c029e3$612e1960$766940d5@hagrid>
Message-ID: <39D44F2E.14701980@lemburg.com>

Fredrik Lundh wrote:
> 
> tim wrote:
> > > But really, I don't suspect that anyone is going to do serious
> > > character to number conversion on these esoteric characters. Plain
> > > old digits will do just as they always have ...
> >
> > Which is why I have to wonder whether there's *any* value in exposing the
> > numeric-value property beyond regular old digits.
> 
> the unicode database has three fields dealing with the numeric
> value: decimal digit value (integer), digit value (integer), and
> numeric value (integer *or* rational):
> 
>     "This is a numeric field. If the character has the numeric
>     property, as specified in Chapter 4 of the Unicode Standard,
>     the value of that character is represented with an integer or
>     rational number in this field."
> 
> here's today's proposal: let's claim that it's a bug to return a float
> from "numeric", and change it to return a string instead.

Hmm, how about making the return format an option ?

unicodedata.numeric(char, format=('float' (default), 'string', 'fraction'))
 
> (this will match "decomposition", which is also "broken" -- it really
> should return a tag followed by a sequence of unicode characters).

Same here:

unicodedata.decomposition(char, format=('string' (default), 
                                        'tuple'))

I'd opt for making the API more customizable rather than trying
to find the one and only true return format ;-)
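
For reference, the current string-shaped return value being called
"broken" looks like this (a sketch; the caller has to re-parse the tag
and the hex code points out of one flat string):

```python
import unicodedata

# decomposition() returns an optional <tag> followed by hex code
# points, all packed into a single string.
print(unicodedata.decomposition(u'\N{VULGAR FRACTION ONE HALF}'))
# '<fraction> 0031 2044 0032'  (DIGIT ONE, FRACTION SLASH, DIGIT TWO)
```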

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From thomas.heller@ion-tof.com  Fri Sep 29 09:48:51 2000
From: thomas.heller@ion-tof.com (Thomas Heller)
Date: Fri, 29 Sep 2000 10:48:51 +0200
Subject: [Python-Dev] Patch to avoid conflict with older versions of Python.
References: <ECEPKNMJLHAPFFJHDOJBEEIHDLAA.MarkH@ActiveState.com> <007d01c029e8$00b33570$4500a8c0@thomasnb> <012401c029ea$6cfbc7e0$766940d5@hagrid>
Message-ID: <001601c029f2$1aa72540$4500a8c0@thomasnb>

> > According to my docs, BindImageEx may not be included in early versions
> > of Win95, but who is using that anyway?
>
> lots of people -- the first version of our PythonWare
> installer didn't run on the original Win95 release, and
> we still get complaints about that.
>

Requirements
  Windows NT/2000: Requires Windows NT 4.0 or later.
  Windows 95/98: Requires Windows 95 or later. Available as a
redistributable for Windows 95.

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  Header: Declared in Imagehlp.h.
  Library: Use Imagehlp.lib.

> on the other hand, it's not that hard to use BindImageEx
> only if it exists...
>

Thomas



From tim_one@email.msn.com  Fri Sep 29 10:02:38 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Fri, 29 Sep 2000 05:02:38 -0400
Subject: [Python-Dev] Patch to avoid conflict with older versions of Python.
In-Reply-To: <012401c029ea$6cfbc7e0$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCCENKHIAA.tim_one@email.msn.com>

[Thomas Heller]
> According to my docs, BindImageEx may not be included in early
> versions of Win95, but who is using that anyway?

[/F]
> lots of people -- the first version of our PythonWare
> installer didn't run on the original Win95 release, and
> we still get complaints about that.

Indeed, you got one from me <wink>!

> on the other hand, it's not that hard to use BindImageEx
> only if it exists...

I'm *really* going on vacation now, but if BindImageEx makes sense here
(offhand I confess the intended use of it here didn't click for me), MS's
imagehlp.dll is redistributable -- although it appears they split it into
two DLLs for Win2K and made only "the other one" redistributable there
<arghghghgh> ...




From thomas.heller@ion-tof.com  Fri Sep 29 10:15:27 2000
From: thomas.heller@ion-tof.com (Thomas Heller)
Date: Fri, 29 Sep 2000 11:15:27 +0200
Subject: [Python-Dev] Patch to avoid conflict with older versions of Python.
References: <LNBBLJKPBEHFEDALKOLCCENKHIAA.tim_one@email.msn.com>
Message-ID: <002e01c029f5$d24dbc10$4500a8c0@thomasnb>

> I'm *really* going on vacation now, but if BindImageEx makes sense here
> (offhand I confess the intended use of it here didn't click for me), MS's
> imagehlp.dll is redistributable -- although it appears they split it into
> two DLLs for Win2K and made only "the other one" redistributable there
> <arghghghgh> ...

No need to install it on Win2K (may not even be possible?),
only for Win95.

I just checked: imagehlp.dll is NOT included in Win95b (which I still
use on one computer, but I thought I was in a small minority)

Thomas



From jeremy@beopen.com  Fri Sep 29 15:09:16 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Fri, 29 Sep 2000 10:09:16 -0400 (EDT)
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point  question...)
In-Reply-To: <045201c029c9$8f49fd10$8119fea9@neil>
References: <LNBBLJKPBEHFEDALKOLCGELPHIAA.tim_one@email.msn.com>
 <045201c029c9$8f49fd10$8119fea9@neil>
Message-ID: <14804.41612.747364.118819@bitdiddle.concentric.net>

>>>>> "NH" == Neil Hodgson <nhodgson@bigpond.net.au> writes:

  NH> Finds about 130 characters. The only ones I feel are worth
  NH> worrying about are the half, quarters and eighths (0xbc, 0xbd,
  NH> 0xbe, 0x215b, 0x215c, 0x215d, 0x215e) which are commonly used
  NH> for expressing the prices of stocks and commodities in the US.
  NH> This may be rarely used but it is better to have it available
  NH> than to have people coding up their own translation tables.

The US no longer uses fractions to report stock prices.  Example:
    http://business.nytimes.com/market_summary.asp

LEADERS                            Last      Range         Change    
AMERICAN INDL PPTYS REIT  (IND)   14.06  13.56  - 14.06  0.25  / 1.81% 
R G S ENERGY GROUP INC  (RGS)     28.19  27.50  - 28.19  0.50  / 1.81% 
DRESDNER RCM GLBL STRT INC  (DSF)  6.63   6.63  - 6.63   0.06  / 0.95% 
FALCON PRODS INC  (FCP)            9.63   9.63  - 9.88   0.06  / 0.65% 
GENERAL ELEC CO  (GE)             59.00  58.63  - 59.75  0.19  / 0.32% 

Jeremy


From trentm@ActiveState.com  Fri Sep 29 15:56:34 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Fri, 29 Sep 2000 07:56:34 -0700
Subject: [Python-Dev] Re: WHOA!!! Screw up on my part: how do I undo this (Re: [Python-checkins] CVS: black - Imported sources)
In-Reply-To: <14803.64892.937014.475312@cj42289-a.reston1.va.home.com>; from fdrake@beopen.com on Thu, Sep 28, 2000 at 10:25:00PM -0400
References: <200009270706.AAA21107@slayer.i.sourceforge.net> <20000927003233.C19872@ActiveState.com> <14803.64892.937014.475312@cj42289-a.reston1.va.home.com>
Message-ID: <20000929075634.B15762@ActiveState.com>

On Thu, Sep 28, 2000 at 10:25:00PM -0400, Fred L. Drake, Jr. wrote:
> 
> Trent Mick writes:
>  > I was playing with a different SourceForge project and I screwed up my
>  > CVSROOT (used Python's instead). Sorry SOrry!
> 
>   Well, you blew it.  Don't worry, we'll have you kicked off
> SourceForge in no time!  ;)
>   Well, maybe not.  I've submitted a support request to fix this:
> 
> http://sourceforge.net/support/?func=detailsupport&support_id=106112&group_id=1
> 
> 

Thank you Fred!


Trent

-- 
Trent Mick
TrentM@ActiveState.com


From trentm@ActiveState.com  Fri Sep 29 16:00:17 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Fri, 29 Sep 2000 08:00:17 -0700
Subject: [Python-Dev] Re: WHOA!!! Screw up on my part: how do I undo this (Re: [Python-checkins] CVS: black - Imported sources)
In-Reply-To: <14804.12649.504962.985774@anthem.concentric.net>; from bwarsaw@beopen.com on Fri, Sep 29, 2000 at 02:06:33AM -0400
References: <200009270706.AAA21107@slayer.i.sourceforge.net> <20000927003233.C19872@ActiveState.com> <14804.12649.504962.985774@anthem.concentric.net>
Message-ID: <20000929080017.C15762@ActiveState.com>

On Fri, Sep 29, 2000 at 02:06:33AM -0400, Barry A. Warsaw wrote:
> 
> >>>>> "TM" == Trent Mick <trentm@ActiveState.com> writes:
> 
>     TM> I was playing with a different SourceForge project and I
>     TM> screwed up my CVSROOT (used Python's instead). Sorry SOrry!
> 
>     TM> How do I undo this cleanly? I could 'cvs remove' the
>     TM> README.txt file but that would still leave the top-level
>     TM> 'black/' turd right? Do the SourceForge admin guys have to
>     TM> manually kill the 'black' directory in the repository?
> 
Once a directory's been added, it's nearly impossible to cleanly delete
> it from CVS.  If it's infected people's working directories, you're
> really screwed, because even if the SF admins remove it from the
> repository, it'll be a pain to clean up on the client side.

Hopefully no client machines were infected. People would have to 'cvs co
black' with the Python CVSROOT. I presume people are only doing either 'cvs
co python' or 'cvs co distutils'. ...or is there some sort of 'cvs co *' type
invocation that people could and were using?



> 
> Probably best thing to do is make sure you "cvs rm" everything in the
> directory and then just let "cvs up -P" remove the empty directory.
> Everybody /is/ using -P (and -d) right? :)
>

I didn't know about -P, but I will use it now. For reference for others:

       -P     Prune (remove) directories that are empty after being
              updated, on checkout or update.  Normally, an empty
              directory (one that is void of revision-controlled files)
              is left alone.  Specifying -P will cause these directories
              to be silently removed from your checked-out sources.
              This does not remove the directory from the repository,
              only from your checked out copy.  Note that this option is
              implied by the -r or -D options of checkout and export.


Trent


-- 
Trent Mick
TrentM@ActiveState.com


From bwarsaw@beopen.com  Fri Sep 29 16:12:29 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Fri, 29 Sep 2000 11:12:29 -0400 (EDT)
Subject: [Python-Dev] Re: WHOA!!! Screw up on my part: how do I undo this (Re: [Python-checkins] CVS: black - Imported sources)
References: <200009270706.AAA21107@slayer.i.sourceforge.net>
 <20000927003233.C19872@ActiveState.com>
 <14804.12649.504962.985774@anthem.concentric.net>
 <20000929080017.C15762@ActiveState.com>
Message-ID: <14804.45405.528913.613816@anthem.concentric.net>

>>>>> "TM" == Trent Mick <trentm@ActiveState.com> writes:

    TM> Hopefully no client machines were infected. People would have
    TM> to 'cvs co black' with the Python CVSROOT. I presume people
    TM> are only doing either 'cvs co python'or 'cvs co
    TM> distutils'. ...or is there some sort of 'cvs co *' type
    TM> invocation that people could and were using?

In fact, I usually only "co -d python python/dist/src" :)  But if you
do a "cvs up -d" at the top-level, I think you'll get the new
directory.  Don't know how many people that'll affect, but if you're
going to wax that directory, the sooner the better!

-Barry


From fdrake@beopen.com  Fri Sep 29 16:21:48 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Fri, 29 Sep 2000 11:21:48 -0400 (EDT)
Subject: [Python-Dev] Re: WHOA!!! Screw up on my part: how do I undo this (Re: [Python-checkins] CVS: black - Imported sources)
In-Reply-To: <14804.12649.504962.985774@anthem.concentric.net>
References: <200009270706.AAA21107@slayer.i.sourceforge.net>
 <20000927003233.C19872@ActiveState.com>
 <14804.12649.504962.985774@anthem.concentric.net>
Message-ID: <14804.45964.428895.57625@cj42289-a.reston1.va.home.com>

Barry A. Warsaw writes:
 > Once a directory's been added, it's nearly impossible to cleanly delete
 > it from CVS.  If it's infected people's working directories, you're
 > really screwed, because even if the SF admins remove it from the
 > repository, it'll be a pain to clean up on the client side.

  In general, yes, but since the directory was a separate module (in
CVS terms, "product" in SF terms), there's no way for it to have been
picked up by clients automatically.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From fdrake@beopen.com  Fri Sep 29 17:15:09 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Fri, 29 Sep 2000 12:15:09 -0400 (EDT)
Subject: [Python-Dev] codecs question
Message-ID: <14804.49165.894978.144346@cj42289-a.reston1.va.home.com>

  Jeremy was just playing with the xml.sax package, and decided to
print the string returned from parsing "&#169;" (the copyright
symbol).  Sure enough, he got a traceback:

>>> print u'\251'

Traceback (most recent call last):
  File "<stdin>", line 1, in ?
UnicodeError: ASCII encoding error: ordinal not in range(128)

and asked me about it.  I was a little surprised myself.  First, that
anyone would use "print" in a SAX handler to start with, and second,
that it was so painful.
  Now, I can chalk this up to not using a reasonable stdout that
understands that Unicode needs to be translated to Latin-1 given my
font selection.  So I looked at the codecs module to provide a usable
output stream.  The EncodedFile class provides a nice wrapper around
another file object, and supports encoding both ways.
  Unfortunately, I can't see what "encoding" I should use if I want to
read & write Unicode string objects to it.  ;(  (Marc-Andre, please
tell me I've missed something!)  I also don't think I
can use it with "print", extended or otherwise.
  The PRINT_ITEM opcode calls PyFile_WriteObject() with whatever it
gets, so that's fine.  Then it converts the object using
PyObject_Str() or PyObject_Repr().  For Unicode objects, the tp_str
handler attempts conversion to the default encoding ("ascii" in this
case), and raises the traceback we see above.
  Perhaps a little extra work is needed in PyFile_WriteObject() to
allow Unicode objects to pass through if the file is merely file-like,
and let the next layer handle the conversion?  This would probably
break code, and therefore not be acceptable.
  On the other hand, it's annoying that I can't create a file-object
that takes Unicode strings from "print", and doesn't seem intuitive.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From loewis@informatik.hu-berlin.de  Fri Sep 29 18:16:25 2000
From: loewis@informatik.hu-berlin.de (Martin von Loewis)
Date: Fri, 29 Sep 2000 19:16:25 +0200 (MET DST)
Subject: [Python-Dev] codecs question
Message-ID: <200009291716.TAA05996@pandora.informatik.hu-berlin.de>

>   Unfortunately, I can't see what "encoding" I should use if I want
>   to read & write Unicode string objects to it.  ;( (Marc-Andre,
>   please tell me I've missed something!)

It depends on the output you want to have. One option would be

s=codecs.lookup('unicode-escape')[3](sys.stdout)

Then, s.write(u'\251') prints a string in Python quoting notation.

Unfortunately,

print >>s,u'\251'

won't work, since print *first* tries to convert the argument to a
string, and then prints the string onto the stream.
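
A minimal sketch of the lookup-and-wrap step, writing to an in-memory
buffer rather than sys.stdout so the result is easy to inspect:

```python
import codecs
import io

# codecs.lookup() yields (encode, decode, streamreader, streamwriter);
# element [3] wraps a byte stream so write() escapes non-ASCII for us.
buf = io.BytesIO()
writer = codecs.lookup('unicode_escape')[3](buf)
writer.write(u'\251')          # COPYRIGHT SIGN, U+00A9
print(buf.getvalue())          # b'\\xa9' -- the escaped spelling
```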

>  On the other hand, it's annoying that I can't create a file-object
> that takes Unicode strings from "print", and doesn't seem intuitive.

Since you are asking for a hack :-) How about having an additional
letter of 'u' in the "mode" attribute of a file object?

Then, print would be

def print(stream,string):
  if type(string) == UnicodeType:
    if 'u' in stream.mode:
      stream.write(string)
      return
  stream.write(str(string))

The Stream readers and writers would then need to have a mode of 'ru'
or 'wu', respectively.

Any other protocol to signal unicode-awareness in a stream might do as
well.

Regards,
Martin

P.S. Is there some function to retrieve the UCN names from ucnhash.c?


From mal@lemburg.com  Fri Sep 29 19:08:26 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 29 Sep 2000 20:08:26 +0200
Subject: [Python-Dev] codecs question
References: <200009291716.TAA05996@pandora.informatik.hu-berlin.de>
Message-ID: <39D4DA99.53338FA5@lemburg.com>

Martin von Loewis wrote:
> 
> P.S. Is there some function to retrieve the UCN names from ucnhash.c?

No, there's not even a way to extract those names... a table is
there (_Py_UnicodeCharacterName in ucnhash.c), but no access
function.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From mal@lemburg.com  Fri Sep 29 19:09:13 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 29 Sep 2000 20:09:13 +0200
Subject: [Python-Dev] codecs question
References: <14804.49165.894978.144346@cj42289-a.reston1.va.home.com>
Message-ID: <39D4DAC9.7F8E1CE5@lemburg.com>

"Fred L. Drake, Jr." wrote:
> 
>   Jeremy was just playing with the xml.sax package, and decided to
> print the string returned from parsing "&#169;" (the copyright
> symbol).  Sure enough, he got a traceback:
> 
> >>> print u'\251'
> 
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> UnicodeError: ASCII encoding error: ordinal not in range(128)
> 
> and asked me about it.  I was a little surprised myself.  First, that
> anyone would use "print" in a SAX handler to start with, and second,
> that it was so painful.

That's a consequence of defaulting to ASCII for all platforms
instead of choosing the encoding depending on the current locale
(the site.py file has code which does the latter).

>   Now, I can chalk this up to not using a reasonable stdout that
> understands that Unicode needs to be translated to Latin-1 given my
> font selection.  So I looked at the codecs module to provide a usable
> output stream.  The EncodedFile class provides a nice wrapper around
> another file object, and supports encoding both ways.
>   Unfortunately, I can't see what "encoding" I should use if I want to
> read & write Unicode string objects to it.  ;(  (Marc-Andre, please
> tell me I've missed something!) 

That depends on what you want to see as output ;-) E.g. in
Europe you'd use Latin-1 (which also contains the copyright
symbol).
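
For instance, a Latin-1 stream writer accepts the Unicode string
directly and emits the single Latin-1 byte (sketched against an
in-memory buffer):

```python
import codecs
import io

raw = io.BytesIO()
out = codecs.getwriter('latin-1')(raw)
out.write(u'\251')         # COPYRIGHT SIGN fits in Latin-1
print(raw.getvalue())      # b'\xa9'
```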

> I also don't think I
> can use it with "print", extended or otherwise.
>   The PRINT_ITEM opcode calls PyFile_WriteObject() with whatever it
> gets, so that's fine.  Then it converts the object using
> PyObject_Str() or PyObject_Repr().  For Unicode objects, the tp_str
> handler attempts conversion to the default encoding ("ascii" in this
> case), and raises the traceback we see above.

Right.

>   Perhaps a little extra work is needed in PyFile_WriteObject() to
> allow Unicode objects to pass through if the file is merely file-like,
> and let the next layer handle the conversion?  This would probably
> break code, and therefore not be acceptable.
>   On the other hand, it's annoying that I can't create a file-object
> that takes Unicode strings from "print", and doesn't seem intuitive.

The problem is that the .write() method of a file-like object
will most probably only work with string objects. If
it uses "s#" or "t#" it's lucky, because then the argument
parser will apply the necessary magic to the input object
to get out some object ready for writing to the file. Otherwise
it will simply fail with a type error.

Simply allowing PyObject_Str() to return Unicode objects too
is not an alternative either since that would certainly break
tons of code.

Implementing tp_print for Unicode wouldn't get us anything
either.

Perhaps we'll need to fix PyFile_WriteObject() to special
case Unicode and allow calling .write() with an Unicode
object and fix those .write() methods which don't do the
right thing ?!

This is a project for 2.1. In 2.0 only explicitly calling
the .write() method will do the trick and EncodedFile()
helps with this.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From effbot@telia.com  Fri Sep 29 19:28:38 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Fri, 29 Sep 2000 20:28:38 +0200
Subject: [Python-Dev] codecs question
References: <200009291716.TAA05996@pandora.informatik.hu-berlin.de>
Message-ID: <000001c02a47$f3f5f100$766940d5@hagrid>

> P.S. Is there some function to retrieve the UCN names from ucnhash.c?

the "unicodenames" patch (which replaces ucnhash) includes this
functionality -- but with a little distance, I think it's better to add
it to the unicodedata module.

(it's included in the step 4 patch, soon to be posted to a patch
manager near you...)
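
A sketch of the access functions that fit naturally in unicodedata,
one for each direction of the name table:

```python
import unicodedata

# code point -> name, and name -> code point
print(unicodedata.name(u'\xa9'))                        # 'COPYRIGHT SIGN'
print(unicodedata.lookup('COPYRIGHT SIGN') == u'\xa9')  # True
```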

</F>



From loewis@informatik.hu-berlin.de  Sat Sep 30 10:47:01 2000
From: loewis@informatik.hu-berlin.de (Martin von Loewis)
Date: Sat, 30 Sep 2000 11:47:01 +0200 (MET DST)
Subject: [Python-Dev] codecs question
In-Reply-To: <000001c02a47$f3f5f100$766940d5@hagrid> (effbot@telia.com)
References: <200009291716.TAA05996@pandora.informatik.hu-berlin.de> <000001c02a47$f3f5f100$766940d5@hagrid>
Message-ID: <200009300947.LAA13652@pandora.informatik.hu-berlin.de>

> the "unicodenames" patch (which replaces ucnhash) includes this
> functionality -- but with a little distance, I think it's better to add
> it to the unicodedata module.
>=20
> (it's included in the step 4 patch, soon to be posted to a patch
> manager near you...)

Sounds good. Is there any chance to use this in codecs, then?
I'm thinking of

>>> print u"\N{COPYRIGHT SIGN}".encode("ascii-ucn")
\N{COPYRIGHT SIGN}
>>> print u"\N{COPYRIGHT SIGN}".encode("latin-1-ucn")
©

Regards,
Martin

P.S. Some people will recognize this as the disguised question 'how
can I convert non-convertable characters using the XML entity
notation?'
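
The same effect can be sketched with encode error handlers instead of
dedicated "-ucn" codec variants (the handler names here are the ones
the codec machinery grew for exactly this purpose):

```python
s = u'\N{COPYRIGHT SIGN}'

# escape what the target charset cannot represent, keep the rest
print(s.encode('ascii', 'backslashreplace'))    # b'\\xa9'
print(s.encode('ascii', 'xmlcharrefreplace'))   # b'&#169;' -- XML entity notation
```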


From mal@lemburg.com  Sat Sep 30 11:21:43 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Sat, 30 Sep 2000 12:21:43 +0200
Subject: [Python-Dev] codecs question
References: <200009291716.TAA05996@pandora.informatik.hu-berlin.de> <000001c02a47$f3f5f100$766940d5@hagrid> <200009300947.LAA13652@pandora.informatik.hu-berlin.de>
Message-ID: <39D5BEB7.F4045E8B@lemburg.com>

Martin von Loewis wrote:
> 
> > the "unicodenames" patch (which replaces ucnhash) includes this
> > functionality -- but with a little distance, I think it's better to add
> > it to the unicodedata module.
> >
> > (it's included in the step 4 patch, soon to be posted to a patch
> > manager near you...)
> 
> Sounds good. Is there any chance to use this in codecs, then?

If you need speed, you'd have to write a C codec for this
and yes: the ucnhash module does import a C API using a
PyCObject which you can use to access the static C data
table.

Don't know if Fredrik's version will also support this.

I think a C function as access method would be more generic
than the current direct C table access.

> I'm thinking of
> 
> >>> print u"\N{COPYRIGHT SIGN}".encode("ascii-ucn")
> \N{COPYRIGHT SIGN}
> >>> print u"\N{COPYRIGHT SIGN}".encode("latin-1-ucn")
> ©
> 
> Regards,
> Martin
> 
> P.S. Some people will recognize this as the disguised question 'how
> can I convert non-convertable characters using the XML entity
> notation?'

If you just need a single encoding, e.g. Latin-1, simply clone
the codec (it's coded in unicodeobject.c) and add the XML entity
processing.

Unfortunately, reusing the existing codecs is not too
efficient: the reason is that there is no error handling
which would permit you to say "encode as far as you can
and then return the encoded data plus a position marker
in the input stream/data".

Perhaps we should add a new standard error handling
scheme "break" which simply stops encoding/decoding
whenever an error occurs ?!

This should then allow reusing existing codecs by
processing the input in slices.
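
A sketch of that slice-wise reuse, using the position reported by the
encode error to stop at the first unencodable character (the function
name here is made up for illustration):

```python
def encode_until_error(u, encoding):
    # Encode as far as possible; return the encoded prefix plus the
    # position of the first character the codec could not handle
    # (len(u) if everything converted) -- the "break" scheme in effect.
    try:
        return u.encode(encoding), len(u)
    except UnicodeEncodeError as e:
        return u[:e.start].encode(encoding), e.start

data, pos = encode_until_error(u'abc\u20acdef', 'latin-1')
print(data, pos)   # b'abc' 3 -- EURO SIGN is not in Latin-1
```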

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From mal@lemburg.com  Fri Sep 29 09:15:18 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 29 Sep 2000 10:15:18 +0200
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point
 question...)
References: <LNBBLJKPBEHFEDALKOLCGELPHIAA.tim_one@email.msn.com> <045201c029c9$8f49fd10$8119fea9@neil>
Message-ID: <39D44F96.D4342ADB@lemburg.com>

Neil Hodgson wrote:
> 
>    The 0x302* 'Hangzhou' numerals look like they should be classified as
> digits.

Can't change the Unicode 3.0 database... so even though this might
be useful in some contexts, let's stick to the standard.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From guido@python.org  Sat Sep 30 21:56:18 2000
From: guido@python.org (Guido van Rossum)
Date: Sat, 30 Sep 2000 15:56:18 -0500
Subject: [Python-Dev] Changes in semantics to str()?
Message-ID: <200009302056.PAA14718@cj20424-a.reston1.va.home.com>

When we changed floats to behave differently on repr() than on str(), we
briefly discussed changes to the container objects as well, but
nothing came of it.

Currently, str() of a tuple, list or dictionary is the same as repr()
of those objects.  This is not very consistent.  For example, when we
have a float like 1.1 which can't be represented exactly, str() yields
"1.1" but repr() yields "1.1000000000000001".  But if we place the
same number in a list, it doesn't matter which function we use: we
always get "[1.1000000000000001]".
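[Editorial note: the inconsistency Guido describes is easy to demonstrate with any object whose repr() and str() differ; the class below is a hypothetical stand-in for the float, so the example does not depend on a particular float-repr era.]

```python
class Fake11:
    """Stand-in for a float whose repr() and str() differ."""
    def __repr__(self):
        return '1.1000000000000001'

    def __str__(self):
        return '1.1'

x = Fake11()
print(str(x))    # str() of the object itself uses __str__
print(str([x]))  # but containers render their items with repr()
```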

Below I have included changes to listobject.c, tupleobject.c and
dictobject.c that fix this.  The fixes change the print and str()
callbacks for these objects to use PyObject_Str() on the contained
items -- except if the item is a string or Unicode string.  I made
these exceptions because I don't like the idea of str(["abc"])
yielding [abc] -- I'm too used to the idea of seeing ['abc'] here.
And str() of a Unicode object fails when it contains non-ASCII
characters, so that's no good either -- it would break too much code.

Is it too late to check this in?  Another negative consequence would
be that for user-defined or 3rd party extension objects that have
different repr() and str(), like NumPy arrays, it might break some
code -- but I think this is not very likely.

--Guido van Rossum (home page: http://www.python.org/~guido/)

*** dictobject.c	2000/09/01 23:29:27	2.65
--- dictobject.c	2000/09/30 16:03:04
***************
*** 594,599 ****
--- 594,601 ----
  	register int i;
  	register int any;
  	register dictentry *ep;
+ 	PyObject *item;
+ 	int itemflags;
  
  	i = Py_ReprEnter((PyObject*)mp);
  	if (i != 0) {
***************
*** 609,620 ****
  		if (ep->me_value != NULL) {
  			if (any++ > 0)
  				fprintf(fp, ", ");
! 			if (PyObject_Print((PyObject *)ep->me_key, fp, 0)!=0) {
  				Py_ReprLeave((PyObject*)mp);
  				return -1;
  			}
  			fprintf(fp, ": ");
! 			if (PyObject_Print(ep->me_value, fp, 0) != 0) {
  				Py_ReprLeave((PyObject*)mp);
  				return -1;
  			}
--- 611,630 ----
  		if (ep->me_value != NULL) {
  			if (any++ > 0)
  				fprintf(fp, ", ");
! 			item = (PyObject *)ep->me_key;
! 			itemflags = flags;
! 			if (item == NULL || PyString_Check(item) || PyUnicode_Check(item))
! 				itemflags = 0;
! 			if (PyObject_Print(item, fp, itemflags)!=0) {
  				Py_ReprLeave((PyObject*)mp);
  				return -1;
  			}
  			fprintf(fp, ": ");
! 			item = ep->me_value;
! 			itemflags = flags;
! 			if (item == NULL || PyString_Check(item) || PyUnicode_Check(item))
! 				itemflags = 0;
! 			if (PyObject_Print(item, fp, itemflags) != 0) {
  				Py_ReprLeave((PyObject*)mp);
  				return -1;
  			}
***************
*** 661,666 ****
--- 671,722 ----
  	return v;
  }
  
+ static PyObject *
+ dict_str(dictobject *mp)
+ {
+ 	auto PyObject *v;
+ 	PyObject *sepa, *colon, *item, *repr;
+ 	register int i;
+ 	register int any;
+ 	register dictentry *ep;
+ 
+ 	i = Py_ReprEnter((PyObject*)mp);
+ 	if (i != 0) {
+ 		if (i > 0)
+ 			return PyString_FromString("{...}");
+ 		return NULL;
+ 	}
+ 
+ 	v = PyString_FromString("{");
+ 	sepa = PyString_FromString(", ");
+ 	colon = PyString_FromString(": ");
+ 	any = 0;
+ 	for (i = 0, ep = mp->ma_table; i < mp->ma_size && v; i++, ep++) {
+ 		if (ep->me_value != NULL) {
+ 			if (any++)
+ 				PyString_Concat(&v, sepa);
+ 			item = ep->me_key;
+ 			if (item == NULL || PyString_Check(item) || PyUnicode_Check(item))
+ 				repr = PyObject_Repr(item);
+ 			else
+ 				repr = PyObject_Str(item);
+ 			PyString_ConcatAndDel(&v, repr);
+ 			PyString_Concat(&v, colon);
+ 			item = ep->me_value;
+ 			if (item == NULL || PyString_Check(item) || PyUnicode_Check(item))
+ 				repr = PyObject_Repr(item);
+ 			else
+ 				repr = PyObject_Str(item);
+ 			PyString_ConcatAndDel(&v, repr);
+ 		}
+ 	}
+ 	PyString_ConcatAndDel(&v, PyString_FromString("}"));
+ 	Py_ReprLeave((PyObject*)mp);
+ 	Py_XDECREF(sepa);
+ 	Py_XDECREF(colon);
+ 	return v;
+ }
+ 
  static int
  dict_length(dictobject *mp)
  {
***************
*** 1193,1199 ****
  	&dict_as_mapping,	/*tp_as_mapping*/
  	0,		/* tp_hash */
  	0,		/* tp_call */
! 	0,		/* tp_str */
  	0,		/* tp_getattro */
  	0,		/* tp_setattro */
  	0,		/* tp_as_buffer */
--- 1249,1255 ----
  	&dict_as_mapping,	/*tp_as_mapping*/
  	0,		/* tp_hash */
  	0,		/* tp_call */
! 	(reprfunc)dict_str, /* tp_str */
  	0,		/* tp_getattro */
  	0,		/* tp_setattro */
  	0,		/* tp_as_buffer */
Index: listobject.c
===================================================================
RCS file: /cvsroot/python/python/dist/src/Objects/listobject.c,v
retrieving revision 2.88
diff -c -r2.88 listobject.c
*** listobject.c	2000/09/26 05:46:01	2.88
--- listobject.c	2000/09/30 16:03:04
***************
*** 197,203 ****
  static int
  list_print(PyListObject *op, FILE *fp, int flags)
  {
! 	int i;
  
  	i = Py_ReprEnter((PyObject*)op);
  	if (i != 0) {
--- 197,204 ----
  static int
  list_print(PyListObject *op, FILE *fp, int flags)
  {
! 	int i, itemflags;
! 	PyObject *item;
  
  	i = Py_ReprEnter((PyObject*)op);
  	if (i != 0) {
***************
*** 210,216 ****
  	for (i = 0; i < op->ob_size; i++) {
  		if (i > 0)
  			fprintf(fp, ", ");
! 		if (PyObject_Print(op->ob_item[i], fp, 0) != 0) {
  			Py_ReprLeave((PyObject *)op);
  			return -1;
  		}
--- 211,221 ----
  	for (i = 0; i < op->ob_size; i++) {
  		if (i > 0)
  			fprintf(fp, ", ");
! 		item = op->ob_item[i];
! 		itemflags = flags;
! 		if (item == NULL || PyString_Check(item) || PyUnicode_Check(item))
! 			itemflags = 0;
! 		if (PyObject_Print(item, fp, itemflags) != 0) {
  			Py_ReprLeave((PyObject *)op);
  			return -1;
  		}
***************
*** 245,250 ****
--- 250,285 ----
  	return s;
  }
  
+ static PyObject *
+ list_str(PyListObject *v)
+ {
+ 	PyObject *s, *comma, *item, *repr;
+ 	int i;
+ 
+ 	i = Py_ReprEnter((PyObject*)v);
+ 	if (i != 0) {
+ 		if (i > 0)
+ 			return PyString_FromString("[...]");
+ 		return NULL;
+ 	}
+ 	s = PyString_FromString("[");
+ 	comma = PyString_FromString(", ");
+ 	for (i = 0; i < v->ob_size && s != NULL; i++) {
+ 		if (i > 0)
+ 			PyString_Concat(&s, comma);
+ 		item = v->ob_item[i];
+ 		if (item == NULL || PyString_Check(item) || PyUnicode_Check(item))
+ 			repr = PyObject_Repr(item);
+ 		else
+ 			repr = PyObject_Str(item);
+ 		PyString_ConcatAndDel(&s, repr);
+ 	}
+ 	Py_XDECREF(comma);
+ 	PyString_ConcatAndDel(&s, PyString_FromString("]"));
+ 	Py_ReprLeave((PyObject *)v);
+ 	return s;
+ }
+ 
  static int
  list_compare(PyListObject *v, PyListObject *w)
  {
***************
*** 1484,1490 ****
  	0,		/*tp_as_mapping*/
  	0,		/*tp_hash*/
  	0,		/*tp_call*/
! 	0,		/*tp_str*/
  	0,		/*tp_getattro*/
  	0,		/*tp_setattro*/
  	0,		/*tp_as_buffer*/
--- 1519,1525 ----
  	0,		/*tp_as_mapping*/
  	0,		/*tp_hash*/
  	0,		/*tp_call*/
! 	(reprfunc)list_str, /*tp_str*/
  	0,		/*tp_getattro*/
  	0,		/*tp_setattro*/
  	0,		/*tp_as_buffer*/
***************
*** 1561,1567 ****
  	0,		/*tp_as_mapping*/
  	0,		/*tp_hash*/
  	0,		/*tp_call*/
! 	0,		/*tp_str*/
  	0,		/*tp_getattro*/
  	0,		/*tp_setattro*/
  	0,		/*tp_as_buffer*/
--- 1596,1602 ----
  	0,		/*tp_as_mapping*/
  	0,		/*tp_hash*/
  	0,		/*tp_call*/
! 	(reprfunc)list_str, /*tp_str*/
  	0,		/*tp_getattro*/
  	0,		/*tp_setattro*/
  	0,		/*tp_as_buffer*/
Index: tupleobject.c
===================================================================
RCS file: /cvsroot/python/python/dist/src/Objects/tupleobject.c,v
retrieving revision 2.46
diff -c -r2.46 tupleobject.c
*** tupleobject.c	2000/09/15 07:32:39	2.46
--- tupleobject.c	2000/09/30 16:03:04
***************
*** 167,178 ****
  static int
  tupleprint(PyTupleObject *op, FILE *fp, int flags)
  {
! 	int i;
  	fprintf(fp, "(");
  	for (i = 0; i < op->ob_size; i++) {
  		if (i > 0)
  			fprintf(fp, ", ");
! 		if (PyObject_Print(op->ob_item[i], fp, 0) != 0)
  			return -1;
  	}
  	if (op->ob_size == 1)
--- 167,183 ----
  static int
  tupleprint(PyTupleObject *op, FILE *fp, int flags)
  {
! 	int i, itemflags;
! 	PyObject *item;
  	fprintf(fp, "(");
  	for (i = 0; i < op->ob_size; i++) {
  		if (i > 0)
  			fprintf(fp, ", ");
! 		item = op->ob_item[i];
! 		itemflags = flags;
! 		if (item == NULL || PyString_Check(item) || PyUnicode_Check(item))
! 			itemflags = 0;
! 		if (PyObject_Print(item, fp, itemflags) != 0)
  			return -1;
  	}
  	if (op->ob_size == 1)
***************
*** 200,205 ****
--- 205,234 ----
  	return s;
  }
  
+ static PyObject *
+ tuplestr(PyTupleObject *v)
+ {
+ 	PyObject *s, *comma, *item, *repr;
+ 	int i;
+ 	s = PyString_FromString("(");
+ 	comma = PyString_FromString(", ");
+ 	for (i = 0; i < v->ob_size && s != NULL; i++) {
+ 		if (i > 0)
+ 			PyString_Concat(&s, comma);
+ 		item = v->ob_item[i];
+ 		if (item == NULL || PyString_Check(item) || PyUnicode_Check(item))
+ 			repr = PyObject_Repr(item);
+ 		else
+ 			repr = PyObject_Str(item);
+ 		PyString_ConcatAndDel(&s, repr);
+ 	}
+ 	Py_DECREF(comma);
+ 	if (v->ob_size == 1)
+ 		PyString_ConcatAndDel(&s, PyString_FromString(","));
+ 	PyString_ConcatAndDel(&s, PyString_FromString(")"));
+ 	return s;
+ }
+ 
  static int
  tuplecompare(register PyTupleObject *v, register PyTupleObject *w)
  {
***************
*** 412,418 ****
  	0,		/*tp_as_mapping*/
  	(hashfunc)tuplehash, /*tp_hash*/
  	0,		/*tp_call*/
! 	0,		/*tp_str*/
  	0,		/*tp_getattro*/
  	0,		/*tp_setattro*/
  	0,		/*tp_as_buffer*/
--- 441,447 ----
  	0,		/*tp_as_mapping*/
  	(hashfunc)tuplehash, /*tp_hash*/
  	0,		/*tp_call*/
! 	(reprfunc)tuplestr, /*tp_str*/
  	0,		/*tp_getattro*/
  	0,		/*tp_setattro*/
  	0,		/*tp_as_buffer*/


From fdrake at beopen.com  Fri Sep  1 00:01:41 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 31 Aug 2000 18:01:41 -0400 (EDT)
Subject: [Python-Dev] Syntax error in Makefile for "make install"
In-Reply-To: <39AED489.F953E9EE@per.dem.csiro.au>
References: <39AED489.F953E9EE@per.dem.csiro.au>
Message-ID: <14766.54725.466043.196080@cj42289-a.reston1.va.home.com>

Mark Favas writes:
 > Makefile in the libainstall target of "make install" uses the following
 > construct:
 >                 @if [ "$(MACHDEP)" == "beos" ] ; then \
 > This "==" is illegal in all the /bin/sh's I have lying around, and leads
 > to make failing with:
 > /bin/sh: test: unknown operator ==
 > make: *** [libainstall] Error 1

  Fixed; thanks!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From m.favas at per.dem.csiro.au  Fri Sep  1 00:29:47 2000
From: m.favas at per.dem.csiro.au (Mark Favas)
Date: Fri, 01 Sep 2000 06:29:47 +0800
Subject: [Python-Dev] Namespace collision between lib/xml and site-packages/xml
Message-ID: <39AEDC5B.333F737E@per.dem.csiro.au>

On July 26 I reported that the new xml package in the standard library
collides with and overrides the xml package from the xml-sig that may be
installed in site-packages. This is still the case. The new package does
not have the same functionality as the one in site-packages, and hence
my application (and others relying on similar functionality) gets an
import error. I understood that it was planned that the new library xml
package would check for the site-package version, and transparently hand
over to it if it existed. It's not really an option to remove/rename the
xml package in the std lib, or to break existing xml-based code...

Of course, this might be fixed by 2.0b1, or is it a feature that will be
frozen out <wry smile>?

Fred's response was:
"  I expect we'll be making the package in site-packages an extension
provider for the xml package in the standard library.  I'm planning to
discuss this issue at today's PythonLabs meeting." 
-- 
Mark



From ping at lfw.org  Fri Sep  1 01:16:55 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Thu, 31 Aug 2000 18:16:55 -0500 (CDT)
Subject: [Python-Dev] linuxaudiodev test does nothing
In-Reply-To: <14766.50976.102853.695767@buffalo.fnal.gov>
Message-ID: <Pine.LNX.4.10.10008311804230.11804-100000@server1.lfw.org>

On Thu, 31 Aug 2000, Charles G Waldman wrote:
> Alas, even after fixing this, I *still* can't get linuxaudiodev to
> play the damned .au file.  It works fine for the .wav formats.
> 
> I'll continue hacking on this as time permits.

Just so you know -- i was definitely able to get this to work at
some point before when we were trying to fix this.  I changed
test_linuxaudiodev and it played the .AU file correctly.  I haven't
had time to survey what the state of the various modules is now,
though -- i'll have a look around and see what's going on.

Side note: is there a well-defined platform-independent sound
interface we should be conforming to?  It would be nice to have a
single Python function for each of the following things:

    1. Play a .wav file given its filename.

    2. Play a .au file given its filename.

    3. Play some raw audio data, given a string of bytes and a
       sampling rate.

which would work on as many platforms as possible with the same command.
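[Editorial note: item 3 (raw bytes plus a sampling rate) reduces to item 1 once the data has a header; a minimal sketch, assuming little-endian 16-bit mono PCM. `wav_for_raw` is a hypothetical helper, not an existing audiodev API.]

```python
import struct

def wav_for_raw(data, rate, channels=1, sampwidth=2):
    """Wrap raw PCM bytes in a minimal 44-byte RIFF/WAVE header
    (sketch; assumes little-endian integer PCM samples)."""
    byte_rate = rate * channels * sampwidth
    block_align = channels * sampwidth
    header = (b'RIFF' + struct.pack('<I', 36 + len(data)) + b'WAVE'
              + b'fmt ' + struct.pack('<IHHIIHH', 16, 1, channels,
                                      rate, byte_rate, block_align,
                                      8 * sampwidth)
              + b'data' + struct.pack('<I', len(data)))
    return header + data
```

The result can then be handed to whatever .wav-capable backend a given platform provides.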

A quick glance at audiodev.py shows that it seems to support only
Sun and SGI.  Should it be extended?

If someone's already in charge of this and knows what's up, let me know.
I'm sorry if this is common knowledge of which i was just unaware.



-- ?!ng




From effbot at telia.com  Fri Sep  1 00:47:03 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Fri, 1 Sep 2000 00:47:03 +0200
Subject: [Python-Dev] threadmodule.c comment error? (from comp.lang.python)
Message-ID: <00d001c0139d$7be87900$766940d5@hagrid>

as noted by curtis jensen over at comp.lang.python:

the parse tuple string doesn't quite match the error message
given if the 2nd argument isn't a tuple.  on the other hand, the
args argument is initialized to NULL...

thread_PyThread_start_new_thread(PyObject *self, PyObject *fargs)
{
 PyObject *func, *args = NULL, *keyw = NULL;
 struct bootstate *boot;

 if (!PyArg_ParseTuple(fargs, "OO|O:start_new_thread", &func, &args, &keyw))
  return NULL;
 if (!PyCallable_Check(func)) {
  PyErr_SetString(PyExc_TypeError,
    "first arg must be callable");
  return NULL;
 }
 if (!PyTuple_Check(args)) {
  PyErr_SetString(PyExc_TypeError,
    "optional 2nd arg must be a tuple");
  return NULL;
 }
 if (keyw != NULL && !PyDict_Check(keyw)) {
  PyErr_SetString(PyExc_TypeError,
    "optional 3rd arg must be a dictionary");
  return NULL;
 }

what's the right way to fix this? (change the error message
and remove the initialization, or change the parsetuple string
and the tuple check)

</F>




From effbot at telia.com  Fri Sep  1 00:30:23 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Fri, 1 Sep 2000 00:30:23 +0200
Subject: [Python-Dev] one last SRE headache
References: <LNBBLJKPBEHFEDALKOLCEEELHDAA.tim_one@email.msn.com>
Message-ID: <009301c0139b$0ea31000$766940d5@hagrid>

tim:

> [/F]
> > I had to add one rule:
> >
> >     If it starts with a zero, it's always an octal number.
> >     Up to two more octal digits are accepted after the
> >     leading zero.
> >
> > but this still fails on this pattern:
> >
> >     r'(a)(b)(c)(d)(e)(f)(g)(h)(i)(j)(k)(l)\119'
> >
> > where the last part is supposed to be a reference to
> > group 11, followed by a literal '9'.
> 
> But 9 isn't an octal digit, so it fits w/ your new rule just fine.

last time I checked, "1" wasn't a valid zero.

but nevermind; I think I've figured it out (see other mail)

</F>




From effbot at telia.com  Fri Sep  1 00:28:40 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Fri, 1 Sep 2000 00:28:40 +0200
Subject: [Python-Dev] one last SRE headache
References: <LNBBLJKPBEHFEDALKOLCEEEIHDAA.tim_one@email.msn.com>
Message-ID: <008701c0139a$d1619ae0$766940d5@hagrid>

tim peters:
> The PRE documentation expresses the true intent:
> 
>     \number
>     Matches the contents of the group of the same number. Groups
>     are numbered starting from 1. For example, (.+) \1 matches 'the the'
>     or '55 55', but not 'the end' (note the space after the group). This
>     special sequence can only be used to match one of the first 99 groups.
>     If the first digit of number is 0, or number is 3 octal digits long,
>     it will not be interpreted as a group match, but as the character with
>     octal value number.

yeah, I've read that.  clear as coffee.

but looking at it again, I suppose that the right way to
implement this is (doing the tests in the given order):

    if it starts with zero, it's an octal escape
    (1 or 2 octal digits may follow)

    if it starts with an octal digit, AND is followed
    by two other octal digits, it's an octal escape

    if it starts with any digit, it's a reference
    (1 extra decimal digit may follow)

oh well.  too bad my scanner only provides a one-character
lookahead...
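[Editorial note: the three rules above match what the `re` module (SRE's descendant) ended up doing; each can be checked directly. The `\119` pattern is the one from the earlier message in this thread.]

```python
import re

# rule 1: starts with zero -> octal escape (\011 is chr(0o11), i.e. TAB)
assert re.match(r'\011', '\t')

# rule 2: three octal digits -> octal escape (\101 is chr(0o101), 'A')
assert re.match(r'\101', 'A')

# rule 3: otherwise a group reference (\1 refers back to group 1)
assert re.match(r'(a)\1', 'aa')

# ...so \119 is group 11 followed by a literal '9'
pat = r'(a)(b)(c)(d)(e)(f)(g)(h)(i)(j)(k)(l)\119'
assert re.match(pat, 'abcdefghijklk9')
```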

</F>




From bwarsaw at beopen.com  Fri Sep  1 01:22:53 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 31 Aug 2000 19:22:53 -0400 (EDT)
Subject: [Python-Dev] test_gettext.py fails on 64-bit architectures
References: <39AEBD4A.55ABED9E@per.dem.csiro.au>
	<39AE07FF.478F413@per.dem.csiro.au>
	<14766.14278.609327.610929@anthem.concentric.net>
	<39AEBD01.601F7A83@per.dem.csiro.au>
Message-ID: <14766.59597.713039.633184@anthem.concentric.net>

>>>>> "MF" == Mark Favas <m.favas at per.dem.csiro.au> writes:

    MF> Close, but no cigar - fixes the miscalculation of BE_MAGIC,
    MF> but "magic" is still read from the .mo file as
    MF> 0xffffffff950412de (the 64-bit rep of the 32-bit negative
    MF> integer 0x950412de)

Thanks to a quick chat with Tim, who is always quick to grasp the meat
of the issue, we realize we need to & 0xffffffff all the 32 bit
unsigned ints we're reading out of the .mo files.  I'll work out a
patch, and check it in after a test on 32-bit Linux.  Watch for it,
and please try it out on your box.
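[Editorial note: the masking trick Barry describes is easy to show with `struct`; `read_uint32` is a hypothetical sketch of reading a .mo magic number so it compares equal on 32- and 64-bit builds.]

```python
import struct

LE_MAGIC = 0x950412de

def read_uint32(buf, offset, byteorder='<'):
    """Read a 32-bit unsigned value and force it into 32 bits, so
    the result is identical regardless of the platform word size."""
    value = struct.unpack(byteorder + 'I', buf[offset:offset + 4])[0]
    return value & 0xffffffff

mo_header = struct.pack('<I', LE_MAGIC)
assert read_uint32(mo_header, 0) == LE_MAGIC

# a sign-extended 64-bit reading of the same value masks back down too
assert 0xffffffff950412de & 0xffffffff == LE_MAGIC
```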

Thanks,
-Barry



From bwarsaw at beopen.com  Fri Sep  1 00:12:23 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 31 Aug 2000 18:12:23 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules Makefile.pre.in,1.64,1.65
References: <200008312153.OAA03214@slayer.i.sourceforge.net>
Message-ID: <14766.55367.854732.727671@anthem.concentric.net>

>>>>> "Fred" == Fred L Drake <fdrake at users.sourceforge.net> writes:

    Fred> "Modules/Setup.in is newer than Moodules/Setup;"; \ !  echo
------------------------------------------^^^
who let the cows in here?



From Vladimir.Marangozov at inrialpes.fr  Fri Sep  1 00:32:50 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Fri, 1 Sep 2000 00:32:50 +0200 (CEST)
Subject: [Python-Dev] lookdict
Message-ID: <200008312232.AAA14305@python.inrialpes.fr>

I'd like to request some clarifications on the recently checked-in
dict patch. How is it supposed to work, and why is this solution okay?

What's the exact purpose of the 2nd string specialization patch?

Besides that, I must say that now the interpreter is noticeably slower
and MAL and I were warning you kindly about this code, which was
fine-tuned over the years. It is very sensitive and was optimized to death.
The patch that did make it was labeled "not ready" and I would have
appreciated another round of review. Not that I disagree, but now I feel
obliged to submit another patch to make some obvious perf improvements
(at least), which simply duplicates work... Fred would have done them
very well, but I haven't had the time to say much about the implementation
because the laconic discussion on the Patch Manager was about
functionality.

Now I'd like to bring this on python-dev and see what exactly happened
to lookdict and what the BeOpen team agreed on regarding this function.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From gstein at lyra.org  Fri Sep  1 03:51:04 2000
From: gstein at lyra.org (Greg Stein)
Date: Thu, 31 Aug 2000 18:51:04 -0700
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Python ceval.c,2.200,2.201
In-Reply-To: <14766.65024.122762.332972@bitdiddle.concentric.net>; from jeremy@beopen.com on Thu, Aug 31, 2000 at 08:53:20PM -0400
References: <200009010002.RAA23432@slayer.i.sourceforge.net> <14766.65024.122762.332972@bitdiddle.concentric.net>
Message-ID: <20000831185103.D3278@lyra.org>

On Thu, Aug 31, 2000 at 08:53:20PM -0400, Jeremy Hylton wrote:
> Any opinion on whether the Py_SetRecursionLimit should do sanity
> checking on its arguments?

-1 ... it's an advanced function. It's the caller's problem if they monkey
it up.

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/



From gstein at lyra.org  Fri Sep  1 04:12:08 2000
From: gstein at lyra.org (Greg Stein)
Date: Thu, 31 Aug 2000 19:12:08 -0700
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Python ceval.c,2.200,2.201
In-Reply-To: <200009010002.RAA23432@slayer.i.sourceforge.net>; from tim_one@users.sourceforge.net on Thu, Aug 31, 2000 at 05:02:01PM -0700
References: <200009010002.RAA23432@slayer.i.sourceforge.net>
Message-ID: <20000831191208.G3278@lyra.org>

On Thu, Aug 31, 2000 at 05:02:01PM -0700, Tim Peters wrote:
> Update of /cvsroot/python/python/dist/src/Python
> In directory slayer.i.sourceforge.net:/tmp/cvs-serv20859/python/dist/src/Python
> 
> Modified Files:
> 	ceval.c 
> Log Message:
> Supply missing prototypes for new Py_{Get,Set}RecursionLimit; fixes compiler wngs;
> un-analize Get's definition ("void" is needed only in declarations, not defns, &
> is generally considered bad style in the latter).

wtf? Placing a void in both declaration *and* definition is "good style".

static int foo(void) { ... }
int bar() { ... }

You're setting yourself up for inconsistency if you don't always use a
prototypical definition. In the above example, foo() must be
declared/defined using a prototype (or you get warnings from gcc when you
compile with -Wmissing-prototypes (which is recommended for developers)).
But you're saying bar() should *not* have a prototype.


-1 on dropping the "void" from the definition. I disagree it is bad form,
and it sets us up for inconsistencies.

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/



From gward at python.net  Fri Sep  1 04:10:47 2000
From: gward at python.net (Greg Ward)
Date: Thu, 31 Aug 2000 19:10:47 -0700
Subject: [Python-Dev] ANNOUNCE: Distutils 0.9.2
Message-ID: <20000831191047.C31473@python.net>

...just in time for the Python 2.0b1 feature freeze, Distutils 0.9.2 has
been released.  Changes since 0.9.1:

  * fixed bug that broke extension-building under Windows for older
    setup scripts (not using the new Extension class)
      
  * new version of bdist_wininst command and associated tools: fixes
    some bugs, produces a smaller executable, and has a nicer GUI
    (thanks to Thomas Heller)
		
  * added some hooks to 'setup()' to allow some slightly sneaky ways
    into the Distutils, in addition to the standard "run 'setup()'
    from a setup script"
	
Get your copy today:

  http://www.python.org/sigs/distutils-sig/download.html
  
        Greg



From jeremy at beopen.com  Fri Sep  1 04:40:25 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 31 Aug 2000 22:40:25 -0400 (EDT)
Subject: [Python-Dev] static int debug = 0;
Message-ID: <14767.5913.521593.234904@bitdiddle.concentric.net>

Quick note on BDFL-approved style for C code.

I recently changed a line in gcmodule.c from
static int debug;
to 
static int debug = 0;

The change is redundant, as several people pointed out, because the C
std requires debug to be initialized to 0.  I didn't realize this.
Inadvertently, however, I made the right change.  The preferred style
is to be explicit about initialization if other code depends on or
assumes that it is initialized to a particular value -- even if that
value is 0.

If the code is guaranteed to do an assignment of its own before the
first use, it's okay to omit the initialization with the decl.

Jeremy






From greg at cosc.canterbury.ac.nz  Fri Sep  1 04:37:36 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 01 Sep 2000 14:37:36 +1200 (NZST)
Subject: [Python-Dev] Pragmas: Just say "No!"
In-Reply-To: <39AE5E79.C2C91730@lemburg.com>
Message-ID: <200009010237.OAA18429@s454.cosc.canterbury.ac.nz>

"M.-A. Lemburg" <mal at lemburg.com>:

> If it's just the word itself that's bugging you, then
> we can have a separate discussion on that. Perhaps "assume"
> or "declare" would be a better candidates.

Yes, "declare" would be better.  Although I'm still somewhat
uncomfortable with the idea of naming a language feature
before having a concrete example of what it's going to be
used for.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From guido at beopen.com  Fri Sep  1 05:54:10 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 31 Aug 2000 22:54:10 -0500
Subject: [Python-Dev] linuxaudiodev test does nothing
In-Reply-To: Your message of "Thu, 31 Aug 2000 18:16:55 EST."
             <Pine.LNX.4.10.10008311804230.11804-100000@server1.lfw.org> 
References: <Pine.LNX.4.10.10008311804230.11804-100000@server1.lfw.org> 
Message-ID: <200009010354.WAA30234@cj20424-a.reston1.va.home.com>

> A quick glance at audiodev.py shows that it seems to support only
> Sun and SGI.  Should it be extended?

Yes.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Fri Sep  1 06:00:37 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 31 Aug 2000 23:00:37 -0500
Subject: [Python-Dev] Namespace collision between lib/xml and site-packages/xml
In-Reply-To: Your message of "Fri, 01 Sep 2000 06:29:47 +0800."
             <39AEDC5B.333F737E@per.dem.csiro.au> 
References: <39AEDC5B.333F737E@per.dem.csiro.au> 
Message-ID: <200009010400.XAA30273@cj20424-a.reston1.va.home.com>

> On July 26 I reported that the new xml package in the standard library
> collides with and overrides the xml package from the xml-sig that may be
> installed in site-packages. This is still the case. The new package does
> not have the same functionality as the one in site-packages, and hence
> my application (and others relying on similar functionality) gets an
> import error. I understood that it was planned that the new library xml
> package would check for the site-package version, and transparently hand
> over to it if it existed. It's not really an option to remove/rename the
> xml package in the std lib, or to break existing xml-based code...
> 
> Of course, this might be fixed by 2.0b1, or is it a feature that will be
> frozen out <wry smile>?
> 
> Fred's response was:
> "  I expect we'll be making the package in site-packages an extension
> provider for the xml package in the standard library.  I'm planning to
> discuss this issue at today's PythonLabs meeting." 

I remember our group discussion about this.  What's currently
implemented is what we decided there, based upon (Fred's
representation of) what the XML-sig wanted.  So you don't like this
either, right?

I believe there are two conflicting desires here: (1) the standard XML
package by the core should be named simply "xml"; (2) you want the old
XML-sig code (which lives in a package named "xml" but installed in
site-packages) to override the core xml package.

I don't think that's possible -- at least not without a hack that's
too ugly to accept.

You might be able to get the old XML-sig code to override the core xml
package by creating a symlink named _xmlplus to it in site-packages
though.
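[Editorial note: the hand-over Guido suggests can be sketched in a few lines; `prefer_extension` is a hypothetical helper, and `_xmlplus` is the name he proposes for the symlinked site-packages copy. With no `_xmlplus` installed, the stdlib package is used unchanged.]

```python
import sys

def prefer_extension(name, ext_name):
    """If the extended package `ext_name` is installed, let it
    answer for `name`; otherwise fall back to the stdlib package."""
    try:
        module = __import__(ext_name)
        sys.modules[name] = module
        return module
    except ImportError:
        return __import__(name)

xml = prefer_extension('xml', '_xmlplus')
```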

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Fri Sep  1 06:04:02 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 31 Aug 2000 23:04:02 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Python ceval.c,2.200,2.201
In-Reply-To: Your message of "Thu, 31 Aug 2000 19:12:08 MST."
             <20000831191208.G3278@lyra.org> 
References: <200009010002.RAA23432@slayer.i.sourceforge.net>  
            <20000831191208.G3278@lyra.org> 
Message-ID: <200009010404.XAA30306@cj20424-a.reston1.va.home.com>

> You're setting yourself up for inconsistency if you don't always use a
> prototypical definition. In the above example, foo() must be
> declared/defined using a prototype (or you get warnings from gcc when you
> compile with -Wmissing-prototypes (which is recommended for developers)).
> But you're saying bar() should *not* have a prototype.
> 
> 
> -1 on dropping the "void" from the definition. I disagree it is bad form,
> and it sets us up for inconsistencies.

We discussed this briefly today in our group chat, and I'm +0 on
Greg's recommendation (that's +0 on keeping (void) in definitions).

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From tim_one at email.msn.com  Fri Sep  1 05:12:25 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 31 Aug 2000 23:12:25 -0400
Subject: [Python-Dev] RE: [Python-checkins] CVS: python/dist/src/Python ceval.c,2.200,2.201
In-Reply-To: <20000831191208.G3278@lyra.org>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEFJHDAA.tim_one@email.msn.com>

[Greg Stein]
> ...
> static int foo(void) { ... }
> int bar() { ... }
>
> You're setting yourself up for inconsistency if you don't always use a
> prototypical definition. In the above example, foo() must be
> declared/defined using a prototype (or you get warnings from gcc when you
> compile with -Wmissing-prototypes (which is recommended for developers)).
> But you're saying bar() should *not* have a prototype.

This must be about the pragmatics of gcc, as the C std doesn't say any of
that stuff -- to the contrary, in a *definition* (as opposed to a
declaration), bar() and bar(void) are identical in meaning (as far as the
std goes).

But I confess I don't use gcc at the moment, and have mostly used C
grudgingly the past 5 years when porting things to C++, and my "bad style"
really came from the latter (C++ doesn't cater to K&R-style decls or
"Miranda prototypes" at all, so "thing(void)" is just an eyesore there).

> -1 on dropping the "void" from the definition. I disagree it is bad form,
> and it sets us up for inconsistencies.

Good enough for me -- I'll change it back.






From fdrake at beopen.com  Fri Sep  1 05:28:59 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 31 Aug 2000 23:28:59 -0400 (EDT)
Subject: [Python-Dev] static int debug = 0;
In-Reply-To: <14767.5913.521593.234904@bitdiddle.concentric.net>
References: <14767.5913.521593.234904@bitdiddle.concentric.net>
Message-ID: <14767.8827.492944.536878@cj42289-a.reston1.va.home.com>

Jeremy Hylton writes:
 > The change is redundant, as several people pointed out, because the C
 > std requires debug to be initialized to 0.  I didn't realize this.
 > Inadvertently, however, I made the right change.  The preferred style
 > is to be explicit about initialization if other code depends on or
 > assumes that it is initialized to a particular value -- even if that
 > value is 0.

  According to the BDFL?  He's told me *not* to do that if setting it
to 0 (or NULL, in case of a pointer), but I guess that was several
years ago now (before I went to CNRI, I think).
  I need to get a style guide written, I suppose!  -sigh-
  (I agree the right thing is to use explicit initialization, and
would go so far as to say to *always* use it for readability and
robustness in the face of changing code.)


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From jeremy at beopen.com  Fri Sep  1 05:37:41 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 31 Aug 2000 23:37:41 -0400 (EDT)
Subject: [Python-Dev] static int debug = 0;
In-Reply-To: <14767.8827.492944.536878@cj42289-a.reston1.va.home.com>
References: <14767.5913.521593.234904@bitdiddle.concentric.net>
	<14767.8827.492944.536878@cj42289-a.reston1.va.home.com>
Message-ID: <14767.9349.324188.289319@bitdiddle.concentric.net>

>>>>> "FLD" == Fred L Drake, <fdrake at beopen.com> writes:

  FLD> Jeremy Hylton writes:
  >> The change is redundant, as several people pointed out, because
  >> the C std requires debug to be initialized to 0.  I didn't
  >> realize this.  Inadvertently, however, I made the right change.
  >> The preferred style is to be explicit about initialization if
  >> other code depends on or assumes that it is initialized to a
  >> particular value -- even if that value is 0.

  FLD>   According to the BDFL?  He's told me *not* to do that if
  FLD>   setting it
  FLD> to 0 (or NULL, in case of a pointer), but I guess that was
  FLD> several years ago now (before I went to CNRI, I think).

It's these chat sessions.  They bring out the worst in him <wink>.

Jeremy



From guido at beopen.com  Fri Sep  1 06:36:05 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 31 Aug 2000 23:36:05 -0500
Subject: [Python-Dev] static int debug = 0;
In-Reply-To: Your message of "Thu, 31 Aug 2000 23:28:59 -0400."
             <14767.8827.492944.536878@cj42289-a.reston1.va.home.com> 
References: <14767.5913.521593.234904@bitdiddle.concentric.net>  
            <14767.8827.492944.536878@cj42289-a.reston1.va.home.com> 
Message-ID: <200009010436.XAA06824@cj20424-a.reston1.va.home.com>

> Jeremy Hylton writes:
>  > The change is redundant, as several people pointed out, because the C
>  > std requires debug to be initialized to 0.  I didn't realize this.
>  > Inadvertently, however, I made the right change.  The preferred style
>  > is to be explicit about initialization if other code depends on or
>  > assumes that it is initialized to a particular value -- even if that
>  > value is 0.

Fred:
>   According to the BDFL?  He's told me *not* to do that if setting it
> to 0 (or NULL, in case of a pointer), but I guess that was several
> years ago now (before I went to CNRI, I think).

Can't remember that now.  I told Jeremy what he wrote here.

>   I need to get a style guide written, I suppose!  -sigh-

Yes!

>   (I agree the right thing is to use explicit initialization, and
> would go so far as to say to *always* use it for readability and
> robustness in the face of changing code.)

No -- initializing variables that are assigned to first thing later is
less readable.  The presence or absence of the initialization should
be a subtle hint on whether the initial value is used.  If the code
changes, change the initialization.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From tim_one at email.msn.com  Fri Sep  1 05:40:47 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 31 Aug 2000 23:40:47 -0400
Subject: [Python-Dev] test_popen2 broken on Windows
Message-ID: <LNBBLJKPBEHFEDALKOLCGEFLHDAA.tim_one@email.msn.com>

FYI, we know that test_popen2 is broken on Windows.  I'm in the process of
fixing it.





From fdrake at beopen.com  Fri Sep  1 05:42:59 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 31 Aug 2000 23:42:59 -0400 (EDT)
Subject: [Python-Dev] test_popen2 broken on Windows
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEFLHDAA.tim_one@email.msn.com>
References: <LNBBLJKPBEHFEDALKOLCGEFLHDAA.tim_one@email.msn.com>
Message-ID: <14767.9667.205457.791956@cj42289-a.reston1.va.home.com>

Tim Peters writes:
 > FYI, we know that test_popen2 is broken on Windows.  I'm in the process of
 > fixing it.

  If you can think of a good test case for os.popen4(), I'd love to
see it!  I couldn't think of one earlier that even had a remote chance
of being portable.  If you can make one that passes on Windows, I'll
either adapt it or create an alternate for Unix.  ;)


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From tim_one at email.msn.com  Fri Sep  1 05:55:41 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 31 Aug 2000 23:55:41 -0400
Subject: [Python-Dev] FW: test_largefile cause kernel panic in Mac OS X DP4
In-Reply-To: <20000831082821.B3569@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEFMHDAA.tim_one@email.msn.com>

[Trent Mick]
> Tim (or anyone with python-list logs), can you forward this to Sachin (who
> reported the bug).

Sorry for not getting back to you sooner; I just fwd'ed the fellow's
problem as an FYI for the Python-Dev'ers, not as something crucial for
2.0b1.  His symptom is a kernel panic in what looked like a pre-release OS,
and that's certainly not your fault!  Like he said:

>> I guess my next step is to log a bug with Apple.

Since nobody else spoke up, I'll fwd your msg to him eventually, but it
will take a little time to find his address via DejaNews, & it's not a
priority tonight.





From tim_one at email.msn.com  Fri Sep  1 06:03:18 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 1 Sep 2000 00:03:18 -0400
Subject: [Python-Dev] test_popen2 broken on Windows
In-Reply-To: <14767.9667.205457.791956@cj42289-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEFNHDAA.tim_one@email.msn.com>

[Fred]
>   If you can think of a good test case for os.popen4(), I'd love to
> see it!  I couldn't think of one earlier that even had a remote chance
> of being portable.  If you can make one that passes on Windows, I'll
> either adapt it or create an alternate for Unix.  ;)

Not tonight.  I've never used popen4 in my life, and disapprove of almost
all functions with trailing digits in their names.  Also most movies,  and
especially after "The Hidden 2".  How come nobody writes song sequels?
"Stairway to Heaven 2", say, or "Beethoven's Fifth Symphony 3"?  That's one
for Barry to ponder ...

otoh-trailing-digits-are-a-sign-of-quality-in-an-os-name-ly y'rs  - tim





From Mark.Favas at per.dem.csiro.au  Fri Sep  1 09:31:57 2000
From: Mark.Favas at per.dem.csiro.au (Favas, Mark (EM, Floreat))
Date: Fri, 1 Sep 2000 15:31:57 +0800 
Subject: [Python-Dev] Namespace collision between lib/xml and site-pac
	kages/xml
Message-ID: <C03F68DA202BD411B00700B0D022B09E1AD950@martok.wa.CSIRO.AU>

Guido wrote:
>I remember our group discussion about this.  What's currently
>implemented is what we decided there, based upon (Fred's
>representation of) what the XML-sig wanted.  So you don't like this
>either, right?

Hey - not so. I saw the original problem, asked about it, was told it would
be discussed, heard nothing of the results of the discussion, saw that I
still had the same problem close to the release of 2.0b1, thought maybe it
had slipped through the cracks, and asked again in an effort to help. I
apologise if it came across in any other way.

>I believe there are two conflicting desires here: (1) the standard XML
>package by the core should be named simply "xml"; (2) you want the old
>XML-sig code (which lives in a package named "xml" but installed in
>site-packages) to override the core xml package.

I'm happy with (1) being the standard XML package - I thought from Fred's
original post that there might be some way of having both work together. 

>I don't think that's possible -- at least not without a hack that's
>too ugly to accept.

Glad to have this clarified.

>You might be able to get the old XML-sig code to override the core xml
>package by creating a symlink named _xmlplus to it in site-packages
>though.

Thanks for the suggestion - I'll try it. Since my code has to run on Windows
as well, probably the best thing I can do is bundle up the xml-sig stuff in
my distribution, call it something else, and get around it all that way.

Mark



From thomas at xs4all.net  Fri Sep  1 09:41:24 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 1 Sep 2000 09:41:24 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Python ceval.c,2.200,2.201
In-Reply-To: <200009010002.RAA23432@slayer.i.sourceforge.net>; from tim_one@users.sourceforge.net on Thu, Aug 31, 2000 at 05:02:01PM -0700
References: <200009010002.RAA23432@slayer.i.sourceforge.net>
Message-ID: <20000901094123.L12695@xs4all.nl>

On Thu, Aug 31, 2000 at 05:02:01PM -0700, Tim Peters wrote:

> Log Message:
> Supply missing prototypes for new Py_{Get,Set}RecursionLimit; fixes compiler wngs;
> un-analize Get's definition ("void" is needed only in declarations, not defns, &
> is generally considered bad style in the latter).

Funny. I asked this while ANSIfying, and opinions were, well, scattered :)
There are a lot more where that one came from. (See the Modules/ subdir
<wink>)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas at xs4all.net  Fri Sep  1 09:54:09 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 1 Sep 2000 09:54:09 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Objects abstract.c,2.50,2.51
In-Reply-To: <200009010239.TAA27288@slayer.i.sourceforge.net>; from gvanrossum@users.sourceforge.net on Thu, Aug 31, 2000 at 07:39:03PM -0700
References: <200009010239.TAA27288@slayer.i.sourceforge.net>
Message-ID: <20000901095408.M12695@xs4all.nl>

On Thu, Aug 31, 2000 at 07:39:03PM -0700, Guido van Rossum wrote:

> Add parens suggested by gcc -Wall.

No! This groups the checks wrong. HASINPLACE(v) *has* to be true for any of
the other tests to happen. I apologize for botching the earlier two versions
and failing to check them; I've been a bit swamped with work the past week :P
I've checked them in the way they should be. (And checked, with gcc -Wall,
this time. The error is really gone.)

> ! 	else if (HASINPLACE(v)
>   		  && ((v->ob_type->tp_as_sequence != NULL &&
> ! 		      (f = v->ob_type->tp_as_sequence->sq_inplace_concat) != NULL))
>   		 || (v->ob_type->tp_as_number != NULL &&
>   		     (f = v->ob_type->tp_as_number->nb_inplace_add) != NULL))
> --- 814,821 ----
>   			return x;
>   	}
> ! 	else if ((HASINPLACE(v)
>   		  && ((v->ob_type->tp_as_sequence != NULL &&
> ! 		       (f = v->ob_type->tp_as_sequence->sq_inplace_concat)
> ! 		       != NULL)))
>   		 || (v->ob_type->tp_as_number != NULL &&
>   		     (f = v->ob_type->tp_as_number->nb_inplace_add) != NULL))

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From mal at lemburg.com  Fri Sep  1 10:43:56 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 01 Sep 2000 10:43:56 +0200
Subject: [Python-Dev] "declare" reserved word (was: pragma)
References: <200009010237.OAA18429@s454.cosc.canterbury.ac.nz>
Message-ID: <39AF6C4C.62451C87@lemburg.com>

Greg Ewing wrote:
> 
> "M.-A. Lemburg" <mal at lemburg.com>:
> 
> > If it's just the word itself that's bugging you, then
> > we can have a separate discussion on that. Perhaps "assume"
> > or "declare" would be a better candidates.
> 
> Yes, "declare" would be better. Although I'm still somewhat
> uncomfortable with the idea of naming a language feature
> before having a concrete example of what it's going to be
> used for.

I gave some examples in the other pragma thread. The main
idea behind "declare" is to define flags at compilation
time, the encoding of string literals being one of the
original motivations for introducing these flags:

declare encoding = "latin-1"
x = u"This text will be interpreted as Latin-1 and stored as Unicode"

declare encoding = "ascii"
y = u"This is supposed to be ASCII, but contains äöü Umlauts - error !"

A similar approach could be done for 8-bit string literals
provided that the default encoding allows storing the
decoded values.

Say the default encoding is "utf-8", then you could write:

declare encoding = "latin-1"
x = "These are the German Umlauts: äöü"
# x would then be assigned the corresponding UTF-8 value of that string

Another motivation for using these flags is providing the
compiler with information about possible assumptions it
can make:

declare globals = "constant"

The compiler can then add code which caches all global
lookups in locals for subsequent use.
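That caching can already be imitated by hand today, by binding a global to a local name (here via a default argument); a small sketch, with the function name invented for illustration, of the effect such a flag would automate:

```python
import math

def norms(values, sqrt=math.sqrt):
    # "sqrt" is bound once, at function-definition time, so each call in
    # the loop is a fast local lookup instead of a global + module lookup.
    return [sqrt(v) for v in values]

print(norms([4.0, 9.0]))
```

The proposed declaration would let the compiler apply this transformation itself, without the default-argument trick cluttering every signature.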

The reason I'm advertising a new keyword is that we need
a way to tell the compiler about these things from within
the source file. This is currently not possible, but is needed
to allow different modules (from possibly different authors)
to work together without the need to adapt their source
files.

Which flags will actually become available is left to 
a different discussion.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Fri Sep  1 10:55:09 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 01 Sep 2000 10:55:09 +0200
Subject: [Python-Dev] lookdict
References: <200008312232.AAA14305@python.inrialpes.fr>
Message-ID: <39AF6EED.7A591932@lemburg.com>

Vladimir Marangozov wrote:
> 
> I'd like to request some clarifications on the recently checked
> dict patch. How it is supposed to work and why is this solution okay?
> 
> What's the exact purpose of the 2nd string specialization patch?
> 
> Besides that, I must say that now the interpreter is noticeably slower
> and MAL and I were warning you kindly about this code, which was
> fine tuned over the years. It is very sensible and was optimized to death.
> The patch that did make it was labeled "not ready" and I would have
> appreciated another round of review. Not that I disagree, but now I feel
> obliged to submit another patch to make some obvious perf improvements
> (at least), which simply duplicates work... Fred would have done them
> very well, but I haven't had the time to say much about the implementation
> because the laconic discussion on the Patch Manager went about
> functionality.
> 
> Now I'd like to bring this on python-dev and see what exactly happened
> to lookdict and what the BeOpen team agreed on regarding this function.

Just for the record:

Python 1.5.2: 3050 pystones
Python 2.0b1: 2850 pystones before the lookup patch
              2900 pystones after the lookup patch
My old considerably patched Python 1.5:
              4000 pystones

I like Fred's idea about the customized and auto-configuring
lookup mechanism. This should definitely go into 2.1... perhaps
even with a hook that allows C extensions to drop in their own
implementations for certain types of dictionaries, e.g. ones
using perfect hash tables.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From ping at lfw.org  Fri Sep  1 11:11:15 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Fri, 1 Sep 2000 05:11:15 -0400 (EDT)
Subject: [Python-Dev] linuxaudiodev test does nothing
In-Reply-To: <14766.58306.977241.439169@buffalo.fnal.gov>
Message-ID: <Pine.LNX.4.10.10009010506380.1061-100000@skuld.lfw.org>

On Thu, 31 Aug 2000, Charles G Waldman wrote:
>  >     3. Play some raw audio data, given a string of bytes and a
>  >        sampling rate.
> 
> This would never be possible unless you also specified the format and
> encoding of the raw data - are they 8-bit, 16-bit, signed, unsigned,
> big-endian, little-endian, linear, logarithmic ("mu_law"), etc?

You're right, you do have to specify such things.  But when you
do, i'm quite confident that this should be possible, at least
for a variety of common cases.  Certainly raw audio data should
be playable in at least *some* fashion, and we also have a bunch
of very nice functions in the audioop module that can do automatic
conversions if we want to get fancy.
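Byte order is one conversion the standard library already handles without audioop; a minimal sketch (the sample bytes are invented for illustration):

```python
import array
import sys

# Two 16-bit signed samples stored big-endian, as in a .au file's payload.
raw = b"\x00\x01\x00\x02"

samples = array.array("h")        # native 16-bit signed integers
samples.frombytes(raw)
if sys.byteorder == "little":
    samples.byteswap()            # convert big-endian data to native order
```

After this, `samples` holds the values in native order regardless of the host's endianness, ready to hand to whatever device interface is available.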

> Trying to do anything with sound in a
> platform-independent manner is near-impossible.  Even the same
> "platform" (e.g. RedHat 6.2 on Intel) will behave differently
> depending on what soundcard is installed.

Are you talking about OSS vs. ALSA?  Didn't they at least try to
keep some of the basic parts of the interface the same?


-- ?!ng




From moshez at math.huji.ac.il  Fri Sep  1 11:42:58 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Fri, 1 Sep 2000 12:42:58 +0300 (IDT)
Subject: [Python-Dev] linuxaudiodev test does nothing
In-Reply-To: <14766.42287.968420.289804@bitdiddle.concentric.net>
Message-ID: <Pine.GSO.4.10.10009011242120.22219-100000@sundial>

On Thu, 31 Aug 2000, Jeremy Hylton wrote:

> Is the test for linuxaudiodev supposed to play the Spanish Inquistion
> .au file?  I just realized that the test does absolutely nothing on my
> machine.  (I guess I need to get my ears to raise an exception if they
> don't hear anything.)
> 
> I can play the .au file and I use a variety of other audio tools
> regularly.  Is Peter still maintaining it or can someone else offer
> some assistance?

It's probably not the case, but check it isn't skipped. I've added code to
liberally skip it in case the user has no permission or no soundcard.

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From tim_one at email.msn.com  Fri Sep  1 13:34:46 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 1 Sep 2000 07:34:46 -0400
Subject: [Python-Dev] Prerelease Python fun on Windows!
Message-ID: <LNBBLJKPBEHFEDALKOLCIEGJHDAA.tim_one@email.msn.com>

A prerelease of the Python2.0b1 Windows installer is now available via
anonymous FTP, from

    python.beopen.com

file

    /pub/windows/beopen-python2b1p1-20000901.exe
    5,766,988 bytes

Be sure to set FTP Binary mode before you get it.

This is not *the* release.  Indeed, the docs are still from some old
pre-beta version of Python 1.6 (sorry, Fred, but I'm really sleepy!).  What
I'm trying to test here is the installer, and the basic integrity of the
installation.  A lot has changed, and we hope it's all for the better.

Points of particular interest:

+ I'm running a Win98SE laptop.  The install works great for me.  How
  about NT?  2000?  95?  ME?  Win64 <shudder>?

+ For the first time ever, the Windows installer should *not* require
  administrator privileges under NT or 2000.  This is untested.  If you
  log in as an administrator, it should write Python's registry info
  under HKEY_LOCAL_MACHINE.  If not an administrator, it should pop up
  an informative message and write the registry info under
  HKEY_CURRENT_USER instead.  Does this work?  This prerelease includes
  a patch from Mark Hammond that makes Python look in HKCU before HKLM
  (note that that also allows users to override the HKLM settings, if
  desired).

+ Try
    python lib/test/regrtest.py

  test_socket is expected to fail if you're not on a network, or logged
  into your ISP, at the time you run the test suite.  Otherwise
  test_socket is expected to pass.  All other tests are expected to
  pass (although, as always, a number of Unix-specific tests should get
  skipped).

+ Get into a DOS-box Python, and try

      import Tkinter
      Tkinter._test()

  This installation of Python should not interfere with, or be damaged
  by, any other installation of Tcl/Tk you happen to have lying around.
  This is also the first time we're using Tcl/Tk 8.3.2, and that needs
  wider testing too.

+ If the Tkinter test worked, try IDLE!
  Start -> Programs -> Python20 -> IDLE.

+ There is no time limit on this installation.  But if you use it for
  more than 30 days, you're going to have to ask us to pay you <wink>.

windows!-it's-not-just-for-breakfast-anymore-ly y'rs  - tim





From nascheme at enme.ucalgary.ca  Fri Sep  1 15:34:46 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Fri, 1 Sep 2000 07:34:46 -0600
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules gcmodule.c,2.9,2.10
In-Reply-To: <200009010401.VAA20868@slayer.i.sourceforge.net>; from Jeremy Hylton on Thu, Aug 31, 2000 at 09:01:59PM -0700
References: <200009010401.VAA20868@slayer.i.sourceforge.net>
Message-ID: <20000901073446.A4782@keymaster.enme.ucalgary.ca>

On Thu, Aug 31, 2000 at 09:01:59PM -0700, Jeremy Hylton wrote:
> set the default threshold much higher
> we don't need to run gc frequently

Are you sure setting it that high (5000 as opposed to 100) is a good
idea?  Did you do any benchmarking?  If with-gc is going to be on by
default in 2.0 then I would agree with setting it high.  If the GC is
optional then I think it should be left as it is.  People explicitly
enabling the GC obviously have a problem with cyclic garbage.

So, is with-gc going to be default?  At this time I would vote no.

  Neil



From jeremy at beopen.com  Fri Sep  1 16:24:46 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Fri, 1 Sep 2000 10:24:46 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules gcmodule.c,2.9,2.10
In-Reply-To: <20000901073446.A4782@keymaster.enme.ucalgary.ca>
References: <200009010401.VAA20868@slayer.i.sourceforge.net>
	<20000901073446.A4782@keymaster.enme.ucalgary.ca>
Message-ID: <14767.48174.81843.299662@bitdiddle.concentric.net>

>>>>> "NS" == Neil Schemenauer <nascheme at enme.ucalgary.ca> writes:

  NS> On Thu, Aug 31, 2000 at 09:01:59PM -0700, Jeremy Hylton wrote:
  >> set the default threshold much higher we don't need to run gc
  >> frequently

  NS> Are you sure setting it that high (5000 as opposed to 100) is a
  NS> good idea?  Did you do any benchmarking?  If with-gc is going to
  NS> be on by default in 2.0 then I would agree with setting it high.
  NS> If the GC is optional then I think it should be left as it is.
  NS> People explicitly enabling the GC obviously have a problem with
  NS> cyclic garbage.

  NS> So, is with-gc going to be default?  At this time I would vote
  NS> no.

For 2.0b1, it will be on by default, which is why I set the threshold
so high.  If we get a lot of problem reports, we can change either
decision for 2.0 final.

Do you disagree?  If so, why?

Even people who do have problems with cyclic garbage don't necessarily
need a collection every 100 allocations.  (Is my understanding of what
the threshold measures correct?)  This threshold causes GC to occur so
frequently that it can happen during the *compilation* of a small
Python script.

Example: The code in Tools/compiler seems to have a cyclic reference
problem, because its memory consumption drops when GC is enabled.
But the difference in total memory consumption with the threshold at
100 vs. 1000 vs. 5000 is not all that noticeable, a few MB.
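Anyone who wants to experiment with other settings can adjust the thresholds at runtime through the gc module itself:

```python
import gc

old = gc.get_threshold()      # save the current settings (defaults vary by version)
gc.set_threshold(5000)        # raise the generation-0 trigger
print(gc.get_threshold())
gc.set_threshold(*old)        # restore the previous settings
```

That makes it easy to benchmark an application under several thresholds before we settle on a default.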

Jeremy



From skip at mojam.com  Fri Sep  1 16:13:39 2000
From: skip at mojam.com (Skip Montanaro)
Date: Fri, 1 Sep 2000 09:13:39 -0500 (CDT)
Subject: [Python-Dev] cPickle.c out-of-date w.r.t. version in Zope 2.2.1
Message-ID: <14767.47507.843792.223790@beluga.mojam.com>

I'm trying to get Zope 2.2.1 to build so I can use gc to track down a memory
leak.  In working my way through some compilation errors I noticed that
Zope's cPickle.c appears to be somewhat different than Python's version.
(Haven't checked cStringIO.c yet, but I imagine there may be a couple
differences there as well.)

Should we try to sync them up before 2.0b1?  Before 2.0final?  Wait until
2.1?  If so, should I post a patch to the SourceForge Patch Manager or send
diffs to Jim (or both)?

Skip



From thomas at xs4all.net  Fri Sep  1 16:34:52 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 1 Sep 2000 16:34:52 +0200
Subject: [Python-Dev] Prerelease Python fun on Windows!
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIEGJHDAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Fri, Sep 01, 2000 at 07:34:46AM -0400
Message-ID: <20000901163452.N12695@xs4all.nl>

On Fri, Sep 01, 2000 at 07:34:46AM -0400, Tim Peters wrote:

> + I'm running a Win98SE laptop.  The install works great for me.  How
>   about NT?  2000?  95?  ME?  Win64 <shudder>?

It runs fine under Win98 (FE) on my laptop.

> + Try
>     python lib/test/regrtest.py

No strange failures.

> + Get into a DOS-box Python, and try
> 
>       import Tkinter
>       Tkinter._test()
> 
>   This installation of Python should not interfere with, or be damaged
>   by, any other installation of Tcl/Tk you happen to have lying around.
>   This is also the first time we're using Tcl/Tk 8.3.2, and that needs
>   wider testing too.

Correctly uses 8.3.2, and not the 8.1 (or so) that came with Python 1.5.2.

> + If the Tkinter test worked, try IDLE!
>   Start -> Programs -> Python20 -> IDLE.

Works, too. I had a funny experience, though. I tried to quit the
interpreter, which I'd started from a DOS box, using ^Z. And it didn't exit.
And then I started IDLE, and IDLE started up, the menus worked, I could open
a new window, but I couldn't type anything. And then I had a bluescreen. But
after the reboot, everything worked fine, even doing the exact same things.

Could just be windows crashing on me, it does that often enough, even on
freshly installed machines. Something about bad karma or something ;)

> + There is no time limit on this installation.  But if you use it for
>   more than 30 days, you're going to have to ask us to pay you <wink>.

> windows!-it's-not-just-for-breakfast-anymore-ly y'rs  - tim

"Hmmm... I think I'll call you lunch."

(Well, Windows may not be green, but it's definitely not ripe yet! Not for
me, anyway :)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From guido at beopen.com  Fri Sep  1 17:43:32 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 01 Sep 2000 10:43:32 -0500
Subject: [Python-Dev] _PyPclose
Message-ID: <200009011543.KAA09487@cj20424-a.reston1.va.home.com>

The _PyPclose fix looks good, Tim!

The sad thing is that if they had implemented their own data structure
to keep track of the mapping between files and processes, none of this
would have been necessary.  Look:

_PyPopenProcs is a dictionary whose keys are FILE* pointers wrapped in
Python longs, and whose values are lists of length 2 containing a
process handle and a file count.  Pseudocode:

# global:
    _PyPopenProcs = None

# in _PyPopen:
    global _PyPopenProcs
    if _PyPopenProcs is None:
        _PyPopenProcs = {}
    files = <list of files created>
    list = [process_handle, len(files)]
    for file in files:
	_PyPopenProcs[id(file)] = list

# in _PyPclose(file):
    global _PyPopenProcs
    list = _PyPopenProcs[id(file)]
    nfiles = list[1]
    if nfiles > 1:
	list[1] = nfiles-1
    else:
	<wait for the process status>
    del _PyPopenProcs[id(file)]
    if len(_PyPopenProcs) == 0:
        _PyPopenProcs = None

This expands to pages of C code!  There's a *lot* of code dealing with
creating the Python objects, error checking, etc.  I bet that it all
would become much smaller and more readable if a custom C-based data
structure was used.  A linked list associating files with processes
would be all that's needed.  We can even afford a linear search of the
list to see if we just closed the last file open for this process.

Sigh.  Maybe for another time.

(That linked list would require a lock of its own.  Fine.)

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From skip at mojam.com  Fri Sep  1 17:03:30 2000
From: skip at mojam.com (Skip Montanaro)
Date: Fri, 1 Sep 2000 10:03:30 -0500 (CDT)
Subject: [Python-Dev] DEBUG_SAVEALL feature for gc not in 2.0b1?
Message-ID: <14767.50498.896689.445018@beluga.mojam.com>


Neil sent me a patch a week or two ago that implemented a DEBUG_SAVEALL flag
for the gc module.  If set, it assigns all cyclic garbage to gc.garbage
instead of deleting it, thus resurrecting the garbage so you can inspect it.
This seems not to have made it into the CVS repository.

I think this is good mojo and deserves to be in the distribution, if not for
the release, then for 2.1 at least.  I've attached the patch Neil sent me
(which includes code, doc and test updates).  It's helped me track down one
(stupid) cyclic trash bug in my own code.  Neil, unless there are strong
arguments to the contrary, I recommend you submit a patch to SF.
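For reference, the intended usage looks like this (a sketch assuming the flag keeps the DEBUG_SAVEALL name from Neil's patch):

```python
import gc

gc.set_debug(gc.DEBUG_SAVEALL)

class Node:
    pass

a, b = Node(), Node()
a.peer, b.peer = b, a   # create a reference cycle
del a, b
gc.collect()            # with DEBUG_SAVEALL, garbage is saved, not freed

# The cycle's objects are now sitting in gc.garbage for inspection.
cycle = [o for o in gc.garbage if isinstance(o, Node)]

gc.set_debug(0)
gc.garbage.clear()
```

Without the flag the two Node objects would simply be freed; with it you can poke at them and find out what kept them alive.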

Skip

-------------- next part --------------
A non-text attachment was scrubbed...
Name: saveall.patch
Type: application/octet-stream
Size: 9275 bytes
Desc: patch to get gc to resurrect garbage instead of freeing it
URL: <http://mail.python.org/pipermail/python-dev/attachments/20000901/b387b9cb/attachment.obj>

From guido at beopen.com  Fri Sep  1 18:31:26 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 01 Sep 2000 11:31:26 -0500
Subject: [Python-Dev] lookdict
In-Reply-To: Your message of "Fri, 01 Sep 2000 10:55:09 +0200."
             <39AF6EED.7A591932@lemburg.com> 
References: <200008312232.AAA14305@python.inrialpes.fr>  
            <39AF6EED.7A591932@lemburg.com> 
Message-ID: <200009011631.LAA09876@cj20424-a.reston1.va.home.com>

Thanks, Marc-Andre, for pointing out that Fred's lookdict code is
actually an improvement.

The reason for all this is that we found that lookdict() calls
PyObject_Compare() without checking for errors.  If there's a key that
raises an error when compared to another key, the keys compare unequal
and an exception is set, which may disturb an exception that the
caller of PyDict_GetItem() might be calling.  PyDict_GetItem() is
documented as never raising an exception.  This is actually not strong
enough; it was actually intended to never clear an exception either.
The potential errors from PyObject_Compare() violate this contract.
Note that these errors are nothing new; PyObject_Compare() has been
able to raise exceptions for a long time, e.g. from errors raised by
__cmp__().

The first-order fix is to call PyErr_Fetch() and PyErr_Restore()
around the calls to PyObject_Compare().  This is slow (for reasons
Vladimir points out) even though Fred was very careful to only call
PyErr_Fetch() or PyErr_Restore() when absolutely necessary and only
once per lookdict call.  The second-order fix therefore is Fred's
specialization for string-keys-only dicts.

There's another problem: as fixed, lookdict needs a current thread
state!  (Because the exception state is stored per thread.)  There are
cases where PyDict_GetItem() is called when there's no thread state!
The first one we found was Tim Peters' patch for _PyPclose (see
separate message).  There may be others -- we'll have to fix these
when we find them (probably after 2.0b1 is released but hopefully
before 2.0 final).

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From akuchlin at mems-exchange.org  Fri Sep  1 17:42:01 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Fri, 1 Sep 2000 11:42:01 -0400
Subject: [Python-Dev] cPickle.c out-of-date w.r.t. version in Zope 2.2.1
In-Reply-To: <14767.47507.843792.223790@beluga.mojam.com>; from skip@mojam.com on Fri, Sep 01, 2000 at 09:13:39AM -0500
References: <14767.47507.843792.223790@beluga.mojam.com>
Message-ID: <20000901114201.B5855@kronos.cnri.reston.va.us>

On Fri, Sep 01, 2000 at 09:13:39AM -0500, Skip Montanaro wrote:
>leak.  In working my way through some compilation errors I noticed that
>Zope's cPickle.c appears to be somewhat different than Python's version.
>(Haven't checked cStringIO.c yet, but I imagine there may be a couple
>differences there as well.)

There are also diffs in cStringIO.c, though not ones that affect
functionality: ANSI-fication, and a few changes to the Python API
(PyObject_Length -> PyObject_Size, PyObject_NEW -> PyObject_New, &c).

The cPickle.c changes look to be:
    * ANSIfication.
    * API changes.
    * Support for Unicode strings.

The API changes are the most annoying ones, since you need to add
#ifdefs in order for the module to compile with both 1.5.2 and 2.0.
(Might be worth seeing if this can be alleviated with a few strategic
macros, though I think not...)

--amk




From nascheme at enme.ucalgary.ca  Fri Sep  1 17:48:21 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Fri, 1 Sep 2000 09:48:21 -0600
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules gcmodule.c,2.9,2.10
In-Reply-To: <14767.48174.81843.299662@bitdiddle.concentric.net>; from Jeremy Hylton on Fri, Sep 01, 2000 at 10:24:46AM -0400
References: <200009010401.VAA20868@slayer.i.sourceforge.net> <20000901073446.A4782@keymaster.enme.ucalgary.ca> <14767.48174.81843.299662@bitdiddle.concentric.net>
Message-ID: <20000901094821.A5571@keymaster.enme.ucalgary.ca>

On Fri, Sep 01, 2000 at 10:24:46AM -0400, Jeremy Hylton wrote:
> Even people who do have problems with cyclic garbage don't necessarily
> need a collection every 100 allocations.  (Is my understanding of what
> the threshold measures correct?)

It collects after every threshold0 net allocations (allocations minus
deallocations).  If you create and delete
1000 container objects in a loop then no collection would occur.

> But the difference in total memory consumption with the threshold at
> 100 vs. 1000 vs. 5000 is not all that noticable, a few MB.

The last time I did benchmarks with PyBench and pystone I found that the
difference between threshold0 = 100 and threshold0 = 0 (ie. infinity)
was small.  Remember that the collector only counts container objects.
Creating a thousand dicts with lots of non-container objects inside of
them could easily cause an out of memory situation.

Because of the generational collection, usually only about threshold0
objects are examined during a collection.  Thus, setting threshold0 low has the
effect of quickly moving objects into the older generations.  Collection
is quick because only a few objects are examined.  

A portable way to find the total allocated memory would be nice.
Perhaps Vladimir's malloc will help us here.  Alternatively we could
modify PyCore_MALLOC to keep track of it in a global variable.  I think
collecting based on an increase in the total allocated memory would work
better.  What do you think?
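[For readers following along: the threshold tuning Neil describes survives in
today's gc module.  A minimal sketch of the interface he's discussing --
threshold0 governs how many net container allocations trigger a
young-generation collection:]

```python
import gc

# The collector counts net container allocations; a collection of the
# youngest generation runs once that count exceeds threshold0.
t0, t1, t2 = gc.get_threshold()

# Lowering threshold0 makes young-generation collections more frequent,
# which (as Neil notes) also moves survivors into older generations sooner.
gc.set_threshold(100, t1, t2)
assert gc.get_threshold()[0] == 100

# Restore the original thresholds.
gc.set_threshold(t0, t1, t2)
```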

More benchmarks should be done too.  Your compiler would probably be a
good candidate.  I won't have time today but maybe tonight.

  Neil



From gward at mems-exchange.org  Fri Sep  1 17:49:45 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Fri, 1 Sep 2000 11:49:45 -0400
Subject: [Python-Dev] linuxaudiodev test does nothing
In-Reply-To: <Pine.LNX.4.10.10008311804230.11804-100000@server1.lfw.org>; from ping@lfw.org on Thu, Aug 31, 2000 at 06:16:55PM -0500
References: <14766.50976.102853.695767@buffalo.fnal.gov> <Pine.LNX.4.10.10008311804230.11804-100000@server1.lfw.org>
Message-ID: <20000901114945.A15688@ludwig.cnri.reston.va.us>

On 31 August 2000, Ka-Ping Yee said:
> Just so you know -- i was definitely able to get this to work at
> some point before when we were trying to fix this.  I changed
> test_linuxaudiodev and it played the .AU file correctly.  I haven't
> had time to survey what the state of the various modules is now,
> though -- i'll have a look around and see what's going on.

I have three copies of test_linuxaudiodev.py in my Lib/test directory:
the original, Ping's version, and Michael Hudson's version.  I can't
remember who hacked whose, i.e. whether Michael's or Ping's is earlier.
Regardless, none of them work.  Here's how they fail:

$ ./python Lib/test/regrtest.py test_linuxaudiodev
test_linuxaudiodev
1 test OK.

...but the sound is horrible: various people opined on this list, many
months ago when I first reported the problem, that it's probably a
format problem.  (The wav/au mixup seems a likely candidate; it can't be
an endianness problem, since the .au file is 8-bit!)
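[A format mix-up like the one Greg suspects can be diagnosed by inspecting the
header fields before playing anything.  A hedged sketch using the stdlib wave
module -- the in-memory file here is illustrative, not the actual test data:]

```python
import io
import wave

# Build a tiny 8-bit mono WAV in memory, standing in for a real file on disk.
buf = io.BytesIO()
with wave.open(buf, 'wb') as w:
    w.setnchannels(1)
    w.setsampwidth(1)        # 1 byte per sample: endianness cannot matter
    w.setframerate(8000)
    w.writeframes(bytes(100))

# Read the header back; these are the fields a player must match.
buf.seek(0)
with wave.open(buf, 'rb') as r:
    params = r.getparams()
```

Playing 8-bit mono u-law (.au) data as if it were linear 8-bit .wav data, or
vice versa, produces exactly the "horrible sound" symptom.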

$ ./python Lib/test/regrtest.py test_linuxaudiodev-ping
test_linuxaudiodev-ping
Warning: can't open Lib/test/output/test_linuxaudiodev-ping
test test_linuxaudiodev-ping crashed -- audio format not supported by linuxaudiodev: None
1 test failed: test_linuxaudiodev-ping

...no sound.

$ ./python Lib/test/regrtest.py test_linuxaudiodev-hudson
test_linuxaudiodev-hudson
Warning: can't open Lib/test/output/test_linuxaudiodev-hudson
test test_linuxaudiodev-hudson crashed -- linuxaudiodev.error: (11, 'Resource temporarily unavailable')
1 test failed: test_linuxaudiodev-hudson

...this is the oddest one of all: I get the "crashed" message
immediately, but then the sound starts playing.  I hear "Nobody expects
the Spani---" but then it stops, the test script terminates, and I get
the "1 test failed" message and my shell prompt back.

Confused as hell, and completely ignorant of computer audio,

        Greg
-- 
Greg Ward - software developer                gward at mems-exchange.org
MEMS Exchange / CNRI                           voice: +1-703-262-5376
Reston, Virginia, USA                            fax: +1-703-262-5367



From nascheme at enme.ucalgary.ca  Fri Sep  1 17:56:27 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Fri, 1 Sep 2000 09:56:27 -0600
Subject: [Python-Dev] Re: DEBUG_SAVEALL feature for gc not in 2.0b1?
In-Reply-To: <14767.50498.896689.445018@beluga.mojam.com>; from Skip Montanaro on Fri, Sep 01, 2000 at 10:03:30AM -0500
References: <14767.50498.896689.445018@beluga.mojam.com>
Message-ID: <20000901095627.B5571@keymaster.enme.ucalgary.ca>

On Fri, Sep 01, 2000 at 10:03:30AM -0500, Skip Montanaro wrote:
> Neil sent me a patch a week or two ago that implemented a DEBUG_SAVEALL flag
> for the gc module.

I didn't submit the patch to SF yet because I am thinking of redesigning
the gc module API.  I really don't like the current bitmask interface
for setting options.  The redesign could wait for 2.1 but it would be
nice to not have to change a published API.

Does anyone have any ideas on a good interface for setting various GC
options?  There may be many options and they may change with the
evolution of the collector.  My current idea is to use something like:

    gc.get_option(<name>)

    gc.set_option(<name>, <value>, ...)

with the module defining constants for options.  For example:

    gc.set_option(gc.DEBUG_LEAK, 1)

would enable leak debugging.  Does this look okay?  Should I try to get
it done for 2.0?

  Neil



From guido at beopen.com  Fri Sep  1 19:05:21 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 01 Sep 2000 12:05:21 -0500
Subject: [Python-Dev] Prerelease Python fun on Windows!
In-Reply-To: Your message of "Fri, 01 Sep 2000 16:34:52 +0200."
             <20000901163452.N12695@xs4all.nl> 
References: <20000901163452.N12695@xs4all.nl> 
Message-ID: <200009011705.MAA10274@cj20424-a.reston1.va.home.com>

> Works, too. I had a funny experience, though. I tried to quit the
> interpreter, which I'd started from a DOS box, using ^Z. And it didn't exit.

Really?  It didn't exit?  What had you done before?  I do this all the
time without problems.

> And then I started IDLE, and IDLE started up, the menus worked, I could open
> a new window, but I couldn't type anything. And then I had a bluescreen. But
> after the reboot, everything worked fine, even doing the exact same things.
> 
> Could just be windows crashing on me, it does that often enough, even on
> freshly installed machines. Something about bad karma or something ;)

Well, Fredrik Lundh also had some blue screens which he'd reduced to a
DECREF of NULL in _tkinter.  But it's not fixed, so this may still be
lurking.

On the other hand your laptop might have been screwy already by that
time...

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Fri Sep  1 19:10:35 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 01 Sep 2000 12:10:35 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Objects abstract.c,2.50,2.51
In-Reply-To: Your message of "Fri, 01 Sep 2000 09:54:09 +0200."
             <20000901095408.M12695@xs4all.nl> 
References: <200009010239.TAA27288@slayer.i.sourceforge.net>  
            <20000901095408.M12695@xs4all.nl> 
Message-ID: <200009011710.MAA10327@cj20424-a.reston1.va.home.com>

> On Thu, Aug 31, 2000 at 07:39:03PM -0700, Guido van Rossum wrote:
> 
> > Add parens suggested by gcc -Wall.

Thomas replied:

> No! This groups the checks wrong. HASINPLACE(v) *has* to be true for any of
> the other tests to happen. I apologize for botching the earlier 2 versions
> and failing to check them, I've been a bit swamped in work the past week :P
> I've checked them in the way they should be. (And checked, with gcc -Wall,
> this time. The error is really gone.)

Doh!  Good catch.  But after looking at the code, I understand why
it's so hard to get right: it's indented wrong, and it's got very
convoluted logic.

Suggestion: don't try to put so much stuff in a single if expression!
I find the version below much clearer, even though it may test for
f==NULL a few extra times.  Thomas, can you verify that I haven't
changed the semantics this time?  You can check it in if you like it,
or you can have me check it in.

PyObject *
PyNumber_InPlaceAdd(PyObject *v, PyObject *w)
{
	PyObject * (*f)(PyObject *, PyObject *) = NULL;
	PyObject *x;

	if (PyInstance_Check(v)) {
		if (PyInstance_HalfBinOp(v, w, "__iadd__", &x,
					 PyNumber_Add, 0) <= 0)
			return x;
	}
	else if (HASINPLACE(v)) {
		if (v->ob_type->tp_as_sequence != NULL)
			f = v->ob_type->tp_as_sequence->sq_inplace_concat;
		if (f == NULL && v->ob_type->tp_as_number != NULL)
			f = v->ob_type->tp_as_number->nb_inplace_add;
		if (f != NULL)
			return (*f)(v, w);
	}

	BINOP(v, w, "__add__", "__radd__", PyNumber_Add);

	if (v->ob_type->tp_as_sequence != NULL) {
		f = v->ob_type->tp_as_sequence->sq_concat;
		if (f != NULL)
			return (*f)(v, w);
	}
	if (v->ob_type->tp_as_number != NULL) {
		if (PyNumber_Coerce(&v, &w) != 0)
			return NULL;
		if (v->ob_type->tp_as_number != NULL) {
			f = v->ob_type->tp_as_number->nb_add;
			if (f != NULL)
				x = (*f)(v, w);
		}
		Py_DECREF(v);
		Py_DECREF(w);
		if (f != NULL)
			return x;
	}

	return type_error("bad operand type(s) for +=");
}

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From mal at lemburg.com  Fri Sep  1 18:23:01 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 01 Sep 2000 18:23:01 +0200
Subject: [Python-Dev] cPickle.c out-of-date w.r.t. version in Zope 2.2.1
References: <14767.47507.843792.223790@beluga.mojam.com> <20000901114201.B5855@kronos.cnri.reston.va.us>
Message-ID: <39AFD7E5.93C0F437@lemburg.com>

Andrew Kuchling wrote:
> 
> On Fri, Sep 01, 2000 at 09:13:39AM -0500, Skip Montanaro wrote:
> >leak.  In working my way through some compilation errors I noticed that
> >Zope's cPickle.c appears to be somewhat different than Python's version.
> >(Haven't checked cStringIO.c yet, but I imagine there may be a couple
> >differences there as well.)
> 
> There are also diffs in cStringIO.c, though not ones that affect
> functionality: ANSI-fication, and a few changes to the Python API
> (PyObject_Length -> PyObject_Size, PyObject_NEW -> PyObject_New, &c).
> 
> The cPickle.c changes look to be:
>     * ANSIfication.
>     * API changes.
>     * Support for Unicode strings.

Huh ? There is support for Unicode objects in Python's cPickle.c...
does Zope's version do something different ?
 
> The API changes are the most annoying ones, since you need to add
> #ifdefs in order for the module to compile with both 1.5.2 and 2.0.
> (Might be worth seeing if this can be alleviated with a few strategic
> macros, though I think not...)
> 
> --amk
> 

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From skip at mojam.com  Fri Sep  1 18:48:14 2000
From: skip at mojam.com (Skip Montanaro)
Date: Fri, 1 Sep 2000 11:48:14 -0500 (CDT)
Subject: [Python-Dev] cPickle.c out-of-date w.r.t. version in Zope 2.2.1
In-Reply-To: <20000901114201.B5855@kronos.cnri.reston.va.us>
References: <14767.47507.843792.223790@beluga.mojam.com>
	<20000901114201.B5855@kronos.cnri.reston.va.us>
Message-ID: <14767.56782.649516.231305@beluga.mojam.com>

    amk> There are also diffs in cStringIO.c, though not ones that affect
    amk> functionality: ...

    amk> The API changes are the most annoying ones, since you need to add
    amk> #ifdefs in order for the module to compile with both 1.5.2 and 2.0.

After posting my note I compared the Zope and Py2.0 versions of cPickle.c.
There are enough differences (ANSIfication, gc, unicode support) that it
appears not worthwhile to try to get Python 2.0's cPickle to run under
1.5.2 and 2.0.  I tried simply commenting out the relevant lines in Zope's
lib/Components/Setup file.  Zope built fine without them, though I haven't
yet had a chance to test that configuration.  I don't use either cPickle or
cStringIO, nor do I actually use much of Zope, just ZServer and
DocumentTemplates, so I doubt my code would exercise either module heavily.


Skip




From loewis at informatik.hu-berlin.de  Fri Sep  1 19:02:58 2000
From: loewis at informatik.hu-berlin.de (Martin von Loewis)
Date: Fri, 1 Sep 2000 19:02:58 +0200 (MET DST)
Subject: [Python-Dev] DEBUG_SAVEALL feature for gc not in 2.0b1?
Message-ID: <200009011702.TAA26607@pandora.informatik.hu-berlin.de>

> Does this look okay?  Should I try to get it done for 2.0?

I don't see the need for improvement. I consider it a fairly low-level
API, so having bit masks is fine: users dealing with these settings
should know what a bit mask is.

As for the naming of the specific flags: So far, all of them are for
debugging, as would be the proposed DEBUG_SAVEALL. You also have
set/get_threshold, which clearly controls a different kind of setting.

Unless you come up with ten or so additional settings that *must* be
there, I don't see the need for generalizing the API. Why is

  gc.set_option(gc.THRESHOLD, 1000, 100, 10)

so much better than

  gc.set_threshold(1000, 100, 10)

???

Even if you find the need for a better API, it should be possible to
support the current one for a couple more years, no?
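[As it turned out, the bitmask style Martin defends is the one that shipped:
gc.set_debug() takes OR-ed flag constants, including the DEBUG_SAVEALL flag
under discussion in this thread.  A minimal sketch of that interface:]

```python
import gc

# Flags are bits, combined with | and set in a single call.
gc.set_debug(gc.DEBUG_SAVEALL | gc.DEBUG_UNCOLLECTABLE)

# The current mask can be read back and tested bitwise.
flags = gc.get_debug()
assert flags & gc.DEBUG_SAVEALL

# Clear all debug flags again.
gc.set_debug(0)
```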

Martin




From skip at mojam.com  Fri Sep  1 19:24:58 2000
From: skip at mojam.com (Skip Montanaro)
Date: Fri, 1 Sep 2000 12:24:58 -0500 (CDT)
Subject: [Python-Dev] cPickle.c out-of-date w.r.t. version in Zope 2.2.1
In-Reply-To: <39AFD7E5.93C0F437@lemburg.com>
References: <14767.47507.843792.223790@beluga.mojam.com>
	<20000901114201.B5855@kronos.cnri.reston.va.us>
	<39AFD7E5.93C0F437@lemburg.com>
Message-ID: <14767.58986.387449.850867@beluga.mojam.com>

    >> The cPickle.c changes look to be:
    >> * ANSIfication.
    >> * API changes.
    >> * Support for Unicode strings.

    MAL> Huh ? There is support for Unicode objects in Python's cPickle.c...
    MAL> does Zope's version do something different ?

Zope is still running 1.5.2 and thus has a version of cPickle that is at
least that old.  The RCS revision string is

     * $Id: cPickle.c,v 1.72 2000/05/09 18:05:09 jim Exp $

I saw new unicode functions in the Python 2.0 version of cPickle that
weren't in the version distributed with Zope 2.2.1.  Here's a grep buffer
from XEmacs:

    cd /home/dolphin/skip/src/Zope/lib/Components/cPickle/
    grep -n -i unicode cPickle.c /dev/null

    grep finished with no matches found at Fri Sep  1 12:39:57

Skip



From mal at lemburg.com  Fri Sep  1 19:36:17 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 01 Sep 2000 19:36:17 +0200
Subject: [Python-Dev] Verbosity of the Makefile
Message-ID: <39AFE911.927AEDDF@lemburg.com>

This is pure cosmetics, but I found that the latest CVS versions
of the Parser Makefile have become somewhat verbose.

Is this really needed ?

Also, I'd suggest adding a line

.SILENT:

to the top-level Makefile to make possible errors more visible
(without the parser messages the Makefile messages for a clean
run fit on a 25-line display).

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From bwarsaw at beopen.com  Fri Sep  1 19:54:16 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 1 Sep 2000 13:54:16 -0400 (EDT)
Subject: [Python-Dev] Re: Cookie.py security
References: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com>
	<20000830145152.A24581@illuminatus.timo-tasi.org>
Message-ID: <14767.60744.647516.232634@anthem.concentric.net>

>>>>> "timo" ==   <timo at timo-tasi.org> writes:

    timo> Right now, the shortcut 'Cookie.Cookie()' returns an
    timo> instance of the SmartCookie, which uses Pickle.  Most extant
    timo> examples of using the Cookie module use this shortcut.

    timo> We could change 'Cookie.Cookie()' to return an instance of
    timo> SimpleCookie, which does not use Pickle.  Unfortunately,
    timo> this may break existing code (like Mailman), but there is a
    timo> lot of code out there that it won't break.

Not any more!  Around the Mailman 2.0beta5 time frame, I completely
revamped Mailman's cookie stuff because lots of people were having
problems.  One of the things I suspected was that the binary data in
cookies was giving some browsers headaches.  So I took great pains to
make sure that Mailman only passed in carefully crafted string data,
avoiding Cookie.py's pickle stuff.

I use marshal in the application code, and I go further to `hexlify'
the marshaled data (see binascii.hexlify() in Python 2.0).  That way,
I'm further guaranteed that the cookie data will consist only of
characters in the set [0-9A-F], and I don't need to quote the data
(which was another source of browser incompatibility).  I don't think
I've seen any cookie problems reported from people using Mailman
2.0b5.
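[A sketch of the marshal-then-hexlify scheme Barry describes -- the payload
dict here is hypothetical, not Mailman's actual cookie format.  Note that
hexlify() actually emits lowercase hex digits, [0-9a-f]:]

```python
import binascii
import marshal

# Hypothetical session data: only simple builtin types, as marshal requires.
payload = {'user': 'someone@example.com', 'admin': 0}

# Marshal the data, then hexlify it so the cookie value contains only
# hex digits and needs no quoting for browsers.
cookie_value = binascii.hexlify(marshal.dumps(payload))

# Decoding reverses both steps.
decoded = marshal.loads(binascii.unhexlify(cookie_value))
assert decoded == payload
```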

[Side note: I also changed Mailman to use session cookies by default,
but that probably had no effect on the problems.]

[Side side note: I also had to patch Morsel.OutputString() in my copy
of Cookie.py because there was a test for falseness that should have
been a test for the empty string explicitly.  Otherwise this fails:

    c['foo']['max-age'] = 0

but this succeeds

    c['foo']['max-age'] = "0"

Don't know if that's relevant for Tim's current version.]
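[The falseness-vs-empty-string distinction in that side note can be seen with
Cookie.py's modern descendant, http.cookies, where the integer case is handled
correctly -- only a truly empty value suppresses the attribute:]

```python
from http.cookies import SimpleCookie  # Cookie.py's Python 3 descendant

c = SimpleCookie()
c['foo'] = 'bar'
c['foo']['max-age'] = 0   # integer zero -- the case Barry had to patch
header = c.output()
```

With a falseness test, the integer 0 would be silently dropped from the
header; testing for the empty string keeps `Max-Age=0` in the output.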

    timo> Also, people could still use the SmartCookie and
    timo> SerialCookie classes, but now they would be more likely to
    timo> read them in the documentation because they are "outside the
    timo> beaten path".

My vote would be to get rid of SmartCookie and SerialCookie and stay
with simple string cookie data only.  Applications can do fancier
stuff on their own if they want.

-Barry



From thomas at xs4all.net  Fri Sep  1 20:00:49 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 1 Sep 2000 20:00:49 +0200
Subject: [Python-Dev] Prerelease Python fun on Windows!
In-Reply-To: <200009011705.MAA10274@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Fri, Sep 01, 2000 at 12:05:21PM -0500
References: <20000901163452.N12695@xs4all.nl> <200009011705.MAA10274@cj20424-a.reston1.va.home.com>
Message-ID: <20000901200049.L477@xs4all.nl>

On Fri, Sep 01, 2000 at 12:05:21PM -0500, Guido van Rossum wrote:
> > Works, too. I had a funny experience, though. I tried to quit the
> > interpreter, which I'd started from a DOS box, using ^Z. And it didn't exit.

> Really?  It didn't exit?  What had you done before?  I do this all the
> time without problems.

I remember doing 'dir()' and that's it... probably hit a few cursor keys out
of habit. I was discussing something with a ^@#$*(*#%* suit (the
not-very-intelligent type) and our CEO (who was very interested in the
strange windows, because he thought I was doing something with ADSL :) at the
same time, so I don't remember exactly what I did. I might have hit ^D
before ^Z, though I do remember actively thinking 'must use ^Z' while
starting python, so I don't think so.

When I did roughly the same things after a reboot, all seemed fine. And
yes, I did reboot after installing, before trying things the first time.

> > And then I started IDLE, and IDLE started up, the menus worked, I could open
> > a new window, but I couldn't type anything. And then I had a bluescreen. But
> > after the reboot, everything worked fine, even doing the exact same things.
> > 
> > Could just be windows crashing on me, it does that often enough, even on
> > freshly installed machines. Something about bad karma or something ;)

> Well, Fredrik Lundh also had some blue screens which he'd reduced to a
> DECREF of NULL in _tkinter.  But it's not fixed, so this may still be
> lurking.

The bluescreen came after my entire explorer froze up, so I'm not sure if it
has to do with python crashing. I found it particularly weird that my
'python' interpreter wouldn't exit, and the IDLE windows were working (ie,
Tk working) but not accepting input -- they shouldn't interfere with each
other, should they ?

My laptop is reasonably stable, though it sometimes has some strange glitches
when viewing avi/mpeg's, in particular DVD uhm, 'backups'. But I'm used to
Windows crashing whenever I touch it, so all in all, I think this:

> On the other hand your laptop might have been screwy already by that
> time...

Since all was fine after a reboot, even doing roughly the same things. I'll
see if I can hit it again sometime this weekend. (A full weekend of Python
and Packing ! No work ! Yes!) And I'll do my girl a favor and install
PySol, so she can give it a good testing :-)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From guido at beopen.com  Fri Sep  1 21:34:33 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 01 Sep 2000 14:34:33 -0500
Subject: [Python-Dev] Verbosity of the Makefile
In-Reply-To: Your message of "Fri, 01 Sep 2000 19:36:17 +0200."
             <39AFE911.927AEDDF@lemburg.com> 
References: <39AFE911.927AEDDF@lemburg.com> 
Message-ID: <200009011934.OAA02358@cj20424-a.reston1.va.home.com>

> This is pure cosmetics, but I found that the latest CVS versions
> of the Parser Makefile have become somewhat verbose.
> 
> Is this really needed ?

Like what?  What has been added?

> Also, I'd suggest adding a line
> 
> .SILENT:
> 
> to the top-level Makefile to make possible errors more visible
> (without the parser messages the Makefile messages for a clean
> run fit on a 25-line display).

I tried this, and it's too quiet -- you don't know what's going on at
all any more.  If you like this, just say "make -s".

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From mal at lemburg.com  Fri Sep  1 20:36:37 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 01 Sep 2000 20:36:37 +0200
Subject: [Python-Dev] Verbosity of the Makefile
References: <39AFE911.927AEDDF@lemburg.com> <200009011934.OAA02358@cj20424-a.reston1.va.home.com>
Message-ID: <39AFF735.F9F3A252@lemburg.com>

Guido van Rossum wrote:
> 
> > This is pure cosmetics, but I found that the latest CVS versions
> > of the Parser Makefile have become somewhat verbose.
> >
> > Is this really needed ?
> 
> Like what?  What has been added?

I was referring to this output:

making Makefile in subdirectory Modules
Compiling (meta-) parse tree into NFA grammar
Making DFA for 'single_input' ...
Making DFA for 'file_input' ...
Making DFA for 'eval_input' ...
Making DFA for 'funcdef' ...
Making DFA for 'parameters' ...
Making DFA for 'varargslist' ...
Making DFA for 'fpdef' ...
Making DFA for 'fplist' ...
Making DFA for 'stmt' ...
Making DFA for 'simple_stmt' ...
Making DFA for 'small_stmt' ...
...
Making DFA for 'list_for' ...
Making DFA for 'list_if' ...
Adding FIRST sets ...
Writing graminit.c ...
Writing graminit.h ...
 
> > Also, I'd suggest adding a line
> >
> > .SILENT:
> >
> > to the top-level Makefile to make possible errors more visible
> > (without the parser messages the Makefile messages for a clean
> > run fit on a 25-line display).
> 
> I tried this, and it's to quiet -- you don't know what's going on at
> all any more.  If you like this, just say "make -s".

I know, that's what I have in my .aliases file... just thought
that it might be better to only see problems rather than hundreds
of OS commands.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From thomas at xs4all.net  Fri Sep  1 20:58:41 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 1 Sep 2000 20:58:41 +0200
Subject: [Python-Dev] Verbosity of the Makefile
In-Reply-To: <39AFF735.F9F3A252@lemburg.com>; from mal@lemburg.com on Fri, Sep 01, 2000 at 08:36:37PM +0200
References: <39AFE911.927AEDDF@lemburg.com> <200009011934.OAA02358@cj20424-a.reston1.va.home.com> <39AFF735.F9F3A252@lemburg.com>
Message-ID: <20000901205841.O12695@xs4all.nl>

On Fri, Sep 01, 2000 at 08:36:37PM +0200, M.-A. Lemburg wrote:

> making Makefile in subdirectory Modules
> Compiling (meta-) parse tree into NFA grammar
> Making DFA for 'single_input' ...
> Making DFA for 'file_input' ...
> Making DFA for 'eval_input' ...
> Making DFA for 'funcdef' ...
> Making DFA for 'parameters' ...
> Making DFA for 'varargslist' ...
> Making DFA for 'fpdef' ...
> Making DFA for 'fplist' ...
> Making DFA for 'stmt' ...
> Making DFA for 'simple_stmt' ...
> Making DFA for 'small_stmt' ...
> ...
> Making DFA for 'list_for' ...
> Making DFA for 'list_if' ...
> Adding FIRST sets ...
> Writing graminit.c ...
> Writing graminit.h ...

How about just removing the Grammar rule in releases ? It's only useful for
people fiddling with the Grammar, and we had a lot of those fiddles in the
last few weeks. It's not really necessary to rebuild the grammar after each
reconfigure (which is basically what the Grammar rule does.)

Repetitively-y'rs,
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From guido at beopen.com  Fri Sep  1 22:11:02 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 01 Sep 2000 15:11:02 -0500
Subject: [Python-Dev] Verbosity of the Makefile
In-Reply-To: Your message of "Fri, 01 Sep 2000 20:36:37 +0200."
             <39AFF735.F9F3A252@lemburg.com> 
References: <39AFE911.927AEDDF@lemburg.com> <200009011934.OAA02358@cj20424-a.reston1.va.home.com>  
            <39AFF735.F9F3A252@lemburg.com> 
Message-ID: <200009012011.PAA02974@cj20424-a.reston1.va.home.com>

> I was referring to this output:
> 
> making Makefile in subdirectory Modules
> Compiling (meta-) parse tree into NFA grammar
> Making DFA for 'single_input' ...
> Making DFA for 'file_input' ...
> Making DFA for 'eval_input' ...
> Making DFA for 'funcdef' ...
> Making DFA for 'parameters' ...
> Making DFA for 'varargslist' ...
> Making DFA for 'fpdef' ...
> Making DFA for 'fplist' ...
> Making DFA for 'stmt' ...
> Making DFA for 'simple_stmt' ...
> Making DFA for 'small_stmt' ...
> ...
> Making DFA for 'list_for' ...
> Making DFA for 'list_if' ...
> Adding FIRST sets ...
> Writing graminit.c ...
> Writing graminit.h ...

This should only happen after "make clean" right?  If it annoys you,
we could add >/dev/null to the pgen rule.

> > > Also, I'd suggest adding a line
> > >
> > > .SILENT:
> > >
> > > to the top-level Makefile to make possible errors more visible
> > > (without the parser messages the Makefile messages for a clean
> > > run fit on a 25-line display).
> > 
> > I tried this, and it's too quiet -- you don't know what's going on at
> > all any more.  If you like this, just say "make -s".
> 
> I know, that's what I have in my .aliases file... just thought
> that it might be better to only see problems rather than hundreds
> of OS commands.

-1.  It's too silent to be a good default.  Someone who first unpacks
and builds Python and is used to building other projects would wonder
why make is "hanging" without printing anything.  I've never seen a
Makefile that had this right out of the box.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From nascheme at enme.ucalgary.ca  Fri Sep  1 22:21:36 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Fri, 1 Sep 2000 14:21:36 -0600
Subject: [Python-Dev] Verbosity of the Makefile
In-Reply-To: <200009012011.PAA02974@cj20424-a.reston1.va.home.com>; from Guido van Rossum on Fri, Sep 01, 2000 at 03:11:02PM -0500
References: <39AFE911.927AEDDF@lemburg.com> <200009011934.OAA02358@cj20424-a.reston1.va.home.com> <39AFF735.F9F3A252@lemburg.com> <200009012011.PAA02974@cj20424-a.reston1.va.home.com>
Message-ID: <20000901142136.A8205@keymaster.enme.ucalgary.ca>

I'm going to pipe up again about non-recursive makefiles being a good
thing.  This is another reason.

  Neil



From guido at beopen.com  Fri Sep  1 23:48:02 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 01 Sep 2000 16:48:02 -0500
Subject: [Python-Dev] threadmodule.c comment error? (from comp.lang.python)
In-Reply-To: Your message of "Fri, 01 Sep 2000 00:47:03 +0200."
             <00d001c0139d$7be87900$766940d5@hagrid> 
References: <00d001c0139d$7be87900$766940d5@hagrid> 
Message-ID: <200009012148.QAA08086@cj20424-a.reston1.va.home.com>

> the parse tuple string doesn't quite match the error message
> given if the 2nd argument isn't a tuple.  on the other hand, the
> args argument is initialized to NULL...

I was puzzled until I realized that you mean the error message lies
about the 2nd arg being optional.

I'll remove the word "optional" from the message.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From Vladimir.Marangozov at inrialpes.fr  Fri Sep  1 22:58:50 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Fri, 1 Sep 2000 22:58:50 +0200 (CEST)
Subject: [Python-Dev] lookdict
In-Reply-To: <200009011631.LAA09876@cj20424-a.reston1.va.home.com> from "Guido van Rossum" at Sep 01, 2000 11:31:26 AM
Message-ID: <200009012058.WAA28061@python.inrialpes.fr>

Aha. Thanks for the explanation.

Guido van Rossum wrote:
> 
> Thanks, Marc-Andre, for pointing out that Fred's lookdict code is
> actually an improvement.

Right. I was too fast. There is some speedup due to the string
specialization. I'll post a patch to SF with some more tweaks
of this implementation. Briefly:

- do not call PyErr_Clear() systematically after PyObject_Compare();
  only if (!error_restore && PyErr_Occurred())
- defer variable initializations after common return cases
- avoid using more vars in lookdict_string + specialize string_compare()
- inline the most frequent case in PyDict_GetItem (the first item probe)

> The reason for all this is that we found that lookdict() calls
> PyObject_Compare() without checking for errors.  If there's a key that
> raises an error when compared to another key, the keys compare unequal
> and an exception is set, which may disturb an exception that the
> caller of PyDict_GetItem() might be calling.  PyDict_GetItem() is
> documented as never raising an exception.  This is actually not strong
> enough; it was actually intended to never clear an exception either.
> The potential errors from PyObject_Compare() violate this contract.
> Note that these errors are nothing new; PyObject_Compare() has been
> able to raise exceptions for a long time, e.g. from errors raised by
> __cmp__().
> 
> The first-order fix is to call PyErr_Fetch() and PyErr_Restore()
> around the calls to PyObject_Compare().  This is slow (for reasons
> Vladimir points out) even though Fred was very careful to only call
> PyErr_Fetch() or PyErr_Restore() when absolutely necessary and only
> once per lookdict call.  The second-order fix therefore is Fred's
> specialization for string-keys-only dicts.
> 
> There's another problem: as fixed, lookdict needs a current thread
> state!  (Because the exception state is stored per thread.)  There are
> cases where PyDict_GetItem() is called when there's no thread state!
> The first one we found was Tim Peters' patch for _PyPclose (see
> separate message).  There may be others -- we'll have to fix these
> when we find them (probably after 2.0b1 is released but hopefully
> before 2.0 final).

Hm. Question: is it possible for the thread state to swap during
PyObject_Compare()? If it is possible, things are more complicated
than I thought...

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From Vladimir.Marangozov at inrialpes.fr  Fri Sep  1 23:08:14 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Fri, 1 Sep 2000 23:08:14 +0200 (CEST)
Subject: [Python-Dev] Re: DEBUG_SAVEALL feature for gc not in 2.0b1?
In-Reply-To: <20000901095627.B5571@keymaster.enme.ucalgary.ca> from "Neil Schemenauer" at Sep 01, 2000 09:56:27 AM
Message-ID: <200009012108.XAA28091@python.inrialpes.fr>

Neil Schemenauer wrote:
> 
> I didn't submit the patch to SF yet because I am thinking of redesigning
> the gc module API.  I really don't like the current bitmask interface
> for setting options.

Why? There's nothing wrong with it.

> 
> Does anyone have any ideas on a good interface for setting various GC
> options?  There may be many options and they may change with the
> evolution of the collector.  My current idea is to use something like:
> 
>     gc.get_option(<name>)
> 
>     gc.set_option(<name>, <value>, ...)
> 
> with the module defining constants for options.  For example:
> 
>     gc.set_option(gc.DEBUG_LEAK, 1)
> 
> would enable leak debugging.  Does this look okay?  Should I try to get
> it done for 2.0?

This is too much. Don't worry, it's perfect as is.
Also, I support the idea of exporting the collected garbage for
debugging -- haven't looked at the patch though. Is it possible
to collect it subsequently?

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From guido at beopen.com  Sat Sep  2 00:04:48 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 01 Sep 2000 17:04:48 -0500
Subject: [Python-Dev] lookdict
In-Reply-To: Your message of "Fri, 01 Sep 2000 22:58:50 +0200."
             <200009012058.WAA28061@python.inrialpes.fr> 
References: <200009012058.WAA28061@python.inrialpes.fr> 
Message-ID: <200009012204.RAA08266@cj20424-a.reston1.va.home.com>

> Right. I was too fast. There is some speedup due to the string
> specialization. I'll post a patch to SF with some more tweaks
> of this implementation. Briefly:
> 
> - do not call PyErr_Clear() systematically after PyObject_Compare();
>   only if (!error_restore && PyErr_Occurred())

What do you mean?  The lookdict code checked in already checks
PyErr_Occurred().

> - defer variable initializations after common return cases
> - avoid using more vars in lookdict_string + specialize string_compare()
> - inline the most frequent case in PyDict_GetItem (the first item probe)

Cool.

> Hm. Question: is it possible for the thread state to swap during
> PyObject_Compare()? If it is possible, things are more complicated
> than I thought...

Doesn't matter -- it will always swap back.  It's tied to the
interpreter lock.

Now, for truly devious code dealing with the lock and thread state,
see the changes to _PyPclose() that Tim Peters just checked in...

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From Vladimir.Marangozov at inrialpes.fr  Fri Sep  1 23:16:23 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Fri, 1 Sep 2000 23:16:23 +0200 (CEST)
Subject: [Python-Dev] lookdict
In-Reply-To: <200009012204.RAA08266@cj20424-a.reston1.va.home.com> from "Guido van Rossum" at Sep 01, 2000 05:04:48 PM
Message-ID: <200009012116.XAA28130@python.inrialpes.fr>

Guido van Rossum wrote:
> 
> > Right. I was too fast. There is some speedup due to the string
> > specialization. I'll post a patch to SF with some more tweaks
> > of this implementation. Briefly:
> > 
> > - do not call PyErr_Clear() systematically after PyObject_Compare();
> >   only if (!error_restore && PyErr_Occurred())
> 
> What do you mean?  The lookdict code checked in already checks
> PyErr_Occurred().

I was too fast again. Actually, PyErr_Clear() is called when
PyErr_Occurred() is true. PyErr_Occurred() is called systematically
after PyObject_Compare(), and it will evaluate to true even if the
error was previously fetched.

So I mean that the test for detecting whether a *new* exception is
raised by PyObject_Compare() is (!error_restore && PyErr_Occurred())
because error_restore is set only when there's a previous exception
in place (before the call to PyObject_Compare). And only in this case
we need to clear the new error.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From nascheme at enme.ucalgary.ca  Fri Sep  1 23:36:12 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Fri, 1 Sep 2000 15:36:12 -0600
Subject: [Python-Dev] Re: DEBUG_SAVEALL feature for gc not in 2.0b1?
In-Reply-To: <200009012108.XAA28091@python.inrialpes.fr>; from Vladimir Marangozov on Fri, Sep 01, 2000 at 11:08:14PM +0200
References: <20000901095627.B5571@keymaster.enme.ucalgary.ca> <200009012108.XAA28091@python.inrialpes.fr>
Message-ID: <20000901153612.A9121@keymaster.enme.ucalgary.ca>

On Fri, Sep 01, 2000 at 11:08:14PM +0200, Vladimir Marangozov wrote:
> Also, I support the idea of exporting the collected garbage for
> debugging -- haven't looked at the patch though. Is it possible
> to collect it subsequently?

No.  Once objects are in gc.garbage they are back under the user's
control.  How do you see things working otherwise?

  Neil



From Vladimir.Marangozov at inrialpes.fr  Fri Sep  1 23:47:59 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Fri, 1 Sep 2000 23:47:59 +0200 (CEST)
Subject: [Python-Dev] Re: DEBUG_SAVEALL feature for gc not in 2.0b1?
In-Reply-To: <20000901153612.A9121@keymaster.enme.ucalgary.ca> from "Neil Schemenauer" at Sep 01, 2000 03:36:12 PM
Message-ID: <200009012147.XAA28215@python.inrialpes.fr>

Neil Schemenauer wrote:
> 
> On Fri, Sep 01, 2000 at 11:08:14PM +0200, Vladimir Marangozov wrote:
> > Also, I support the idea of exporting the collected garbage for
> > debugging -- haven't looked at the patch though. Is it possible
> > to collect it subsequently?
> 
> No.  Once objects are in gc.garbage they are back under the user's
> control.  How do you see things working otherwise?

By putting them in gc.collected_garbage. The next collect() should be
able to empty this list if the DEBUG_SAVEALL flag is not set. Do you
see any problems with this?

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From guido at beopen.com  Sat Sep  2 00:43:29 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 01 Sep 2000 17:43:29 -0500
Subject: [Python-Dev] lookdict
In-Reply-To: Your message of "Fri, 01 Sep 2000 23:16:23 +0200."
             <200009012116.XAA28130@python.inrialpes.fr> 
References: <200009012116.XAA28130@python.inrialpes.fr> 
Message-ID: <200009012243.RAA08429@cj20424-a.reston1.va.home.com>

> > > - do not call PyErr_Clear() systematically after PyObject_Compare();
> > >   only if (!error_restore && PyErr_Occurred())
> > 
> > What do you mean?  The lookdict code checked in already checks
> > PyErr_Occurred().
> 
> Was fast again. Actually PyErr_Clear() is called on PyErr_Occurred().
> PyErr_Occurred() is called systematically after PyObject_Compare()
> and it will evaluate to true even if the error was previously fetched.

No, PyErr_Fetch() clears the exception!  PyErr_Restore() restores it.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From Vladimir.Marangozov at inrialpes.fr  Fri Sep  1 23:51:47 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Fri, 1 Sep 2000 23:51:47 +0200 (CEST)
Subject: [Python-Dev] lookdict
In-Reply-To: <200009012243.RAA08429@cj20424-a.reston1.va.home.com> from "Guido van Rossum" at Sep 01, 2000 05:43:29 PM
Message-ID: <200009012151.XAA28257@python.inrialpes.fr>

Guido van Rossum wrote:
> 
> > > > - do not call PyErr_Clear() systematically after PyObject_Compare();
> > > >   only if (!error_restore && PyErr_Occurred())
> > > 
> > > What do you mean?  The lookdict code checked in already checks
> > > PyErr_Occurred().
> > 
> > Was fast again. Actually PyErr_Clear() is called on PyErr_Occurred().
> > PyErr_Occurred() is called systematically after PyObject_Compare()
> > and it will evaluate to true even if the error was previously fetched.
> 
> No, PyErr_Fetch() clears the exception!  PyErr_Restore() restores it.

Oops, right. This saves a function call, then. Still good.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From tim_one at email.msn.com  Fri Sep  1 23:53:09 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 1 Sep 2000 17:53:09 -0400
Subject: [Python-Dev] Even more prerelease Python fun on Windows!
Message-ID: <LNBBLJKPBEHFEDALKOLCMEIDHDAA.tim_one@email.msn.com>

As below, except the new file is

    /pub/windows/beopen-python2b1p2-20000901.exe
    5,783,115 bytes

still from anonymous FTP at python.beopen.com.  The p1 version has been
removed.

+ test_popen2 should work on Windows 2000 now (turned out that,
  as feared, MS "more" doesn't work the same way across Windows
  flavors).

+ Minor changes to the installer.

+ New LICENSE.txt and README.txt in the root of your Python
  installation.

+ Whatever other bugfixes people committed in the 8 hours since
  2b1p1 was built.

Thanks for the help so far!  We've learned that things are generally working
well; that on Windows 2000 the correct one of "admin" or "non-admin" install
works and is correctly triggered by whether the user has admin privileges;
and that Thomas's Win98FE suffers infinitely more blue-screen deaths than
Tim's Win98SE ever did <wink>.

Haven't heard from anyone on Win95, Windows Me, or Windows NT yet.  And I'm
downright eager to ignore Win64 for now.

-----Original Message-----
Sent: Friday, September 01, 2000 7:35 AM
To: PythonDev; Audun.Runde at sas.com
Cc: audun at mindspring.com
Subject: [Python-Dev] Prerelease Python fun on Windows!


A prerelease of the Python2.0b1 Windows installer is now available via
anonymous FTP, from

    python.beopen.com

file

    /pub/windows/beopen-python2b1p1-20000901.exe
    5,766,988 bytes

Be sure to set FTP Binary mode before you get it.

This is not *the* release.  Indeed, the docs are still from some old
pre-beta version of Python 1.6 (sorry, Fred, but I'm really sleepy!).  What
I'm trying to test here is the installer, and the basic integrity of the
installation.  A lot has changed, and we hope all for the better.

Points of particular interest:

+ I'm running a Win98SE laptop.  The install works great for me.  How
  about NT?  2000?  95?  ME?  Win64 <shudder>?

+ For the first time ever, the Windows installer should *not* require
  administrator privileges under NT or 2000.  This is untested.  If you
  log in as an administrator, it should write Python's registry info
  under HKEY_LOCAL_MACHINE.  If not an administrator, it should pop up
  an informative message and write the registry info under
  HKEY_CURRENT_USER instead.  Does this work?  This prerelease includes
  a patch from Mark Hammond that makes Python look in HKCU before HKLM
  (note that that also allows users to override the HKLM settings, if
  desired).

+ Try
    python lib/test/regrtest.py

  test_socket is expected to fail if you're not on a network, or logged
  into your ISP, at the time you run the test suite.  Otherwise
  test_socket is expected to pass.  All other tests are expected to
  pass (although, as always, a number of Unix-specific tests should get
  skipped).

+ Get into a DOS-box Python, and try

      import Tkinter
      Tkinter._test()

  This installation of Python should not interfere with, or be damaged
  by, any other installation of Tcl/Tk you happen to have lying around.
  This is also the first time we're using Tcl/Tk 8.3.2, and that needs
  wider testing too.

+ If the Tkinter test worked, try IDLE!
  Start -> Programs -> Python20 -> IDLE.

+ There is no time limit on this installation.  But if you use it for
  more than 30 days, you're going to have to ask us to pay you <wink>.

windows!-it's-not-just-for-breakfast-anymore-ly y'rs  - tim



_______________________________________________
Python-Dev mailing list
Python-Dev at python.org
http://www.python.org/mailman/listinfo/python-dev





From skip at mojam.com  Sat Sep  2 00:08:05 2000
From: skip at mojam.com (Skip Montanaro)
Date: Fri, 1 Sep 2000 17:08:05 -0500 (CDT)
Subject: [Python-Dev] Re: DEBUG_SAVEALL feature for gc not in 2.0b1?
In-Reply-To: <20000901153612.A9121@keymaster.enme.ucalgary.ca>
References: <20000901095627.B5571@keymaster.enme.ucalgary.ca>
	<200009012108.XAA28091@python.inrialpes.fr>
	<20000901153612.A9121@keymaster.enme.ucalgary.ca>
Message-ID: <14768.10437.352066.987557@beluga.mojam.com>

>>>>> "Neil" == Neil Schemenauer <nascheme at enme.ucalgary.ca> writes:

    Neil> On Fri, Sep 01, 2000 at 11:08:14PM +0200, Vladimir Marangozov wrote:
    >> Also, I support the idea of exporting the collected garbage for
    >> debugging -- haven't looked at the patch though. Is it possible
    >> to collect it subsequently?

    Neil> No.  Once objects are in gc.garbage they are back under the user's
    Neil> control.  How do you see things working otherwise?

Can't you just turn off gc.DEBUG_SAVEALL and reinitialize gc.garbage to []?

Skip
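That is indeed how the flag can be used; a minimal sketch against the
modern gc module (the Node class and the names a/b are invented for
illustration, not from the thread):

```python
import gc

class Node:
    def __init__(self):
        self.ref = None

gc.set_debug(gc.DEBUG_SAVEALL)
a, b = Node(), Node()
a.ref, b.ref = b, a        # create a reference cycle
del a, b
gc.collect()               # cycle is found and saved, not freed
assert gc.garbage          # the cycle's objects now sit in gc.garbage

gc.set_debug(0)            # turn the flag back off, as Skip suggests
gc.garbage[:] = []         # reinitialize gc.garbage; the cycle is
gc.collect()               # reclaimed on the next collection
```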




From nascheme at enme.ucalgary.ca  Sat Sep  2 00:10:32 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Fri, 1 Sep 2000 16:10:32 -0600
Subject: [Python-Dev] Re: DEBUG_SAVEALL feature for gc not in 2.0b1?
In-Reply-To: <200009012147.XAA28215@python.inrialpes.fr>; from Vladimir Marangozov on Fri, Sep 01, 2000 at 11:47:59PM +0200
References: <20000901153612.A9121@keymaster.enme.ucalgary.ca> <200009012147.XAA28215@python.inrialpes.fr>
Message-ID: <20000901161032.B9121@keymaster.enme.ucalgary.ca>

On Fri, Sep 01, 2000 at 11:47:59PM +0200, Vladimir Marangozov wrote:
> By putting them in gc.collected_garbage. The next collect() should be
> able to empty this list if the DEBUG_SAVEALL flag is not set. Do you
> see any problems with this?

I don't really see the point.  If someone has set the SAVEALL flag then
they are obviously debugging a program.  I don't see much point
in the GC cleaning up this garbage.  The user can do it if they like.

I have an idea for an alternate interface.  What if there was a
gc.handle_garbage hook which could be set to a function?  The collector
would pass garbage objects to this function one at a time.  If the
function returns true then it means that the garbage was handled and the
collector should not call tp_clear.  These handlers could be chained
together like import hooks.  The default handler would simply append to
the gc.garbage list.  If a debugging flag was set then all found garbage
would be passed to this function rather than just uncollectable garbage.

Skip, would a hook like this be useful to you?

  Neil



From trentm at ActiveState.com  Sat Sep  2 00:15:13 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Fri, 1 Sep 2000 15:15:13 -0700
Subject: [Python-Dev] Even more prerelease Python fun on Windows!
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEIDHDAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Fri, Sep 01, 2000 at 05:53:09PM -0400
References: <LNBBLJKPBEHFEDALKOLCMEIDHDAA.tim_one@email.msn.com>
Message-ID: <20000901151513.B14097@ActiveState.com>

On Fri, Sep 01, 2000 at 05:53:09PM -0400, Tim Peters wrote:
> And I'm
> downright eager to ignore Win64 for now.

Works for me!

I won't get a chance to look at this for a while.

Trent


-- 
Trent Mick
TrentM at ActiveState.com



From gward at mems-exchange.org  Sat Sep  2 02:56:47 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Fri, 1 Sep 2000 20:56:47 -0400
Subject: [Python-Dev] Verbosity of the Makefile
In-Reply-To: <20000901142136.A8205@keymaster.enme.ucalgary.ca>; from nascheme@enme.ucalgary.ca on Fri, Sep 01, 2000 at 02:21:36PM -0600
References: <39AFE911.927AEDDF@lemburg.com> <200009011934.OAA02358@cj20424-a.reston1.va.home.com> <39AFF735.F9F3A252@lemburg.com> <200009012011.PAA02974@cj20424-a.reston1.va.home.com> <20000901142136.A8205@keymaster.enme.ucalgary.ca>
Message-ID: <20000901205647.A27038@ludwig.cnri.reston.va.us>

On 01 September 2000, Neil Schemenauer said:
> I'm going to pipe up again about non-recursive makefiles being a good
> thing.  This is another reason.

+1 in principle.  I suspect un-recursifying Python's build system would
be a pretty conclusive demonstration of whether the "Recursive Makefiles
Considered Harmful" thesis holds water.  Want to try to hack something
together one of these days?  (Probably not for 2.0, though.)

        Greg



From m.favas at per.dem.csiro.au  Sat Sep  2 03:15:11 2000
From: m.favas at per.dem.csiro.au (Mark Favas)
Date: Sat, 02 Sep 2000 09:15:11 +0800
Subject: [Python-Dev] test_gettext.py fails on 64-bit architectures
References: <39AEBD4A.55ABED9E@per.dem.csiro.au>
		<39AE07FF.478F413@per.dem.csiro.au>
		<14766.14278.609327.610929@anthem.concentric.net>
		<39AEBD01.601F7A83@per.dem.csiro.au> <14766.59597.713039.633184@anthem.concentric.net>
Message-ID: <39B0549F.DA8D07A8@per.dem.csiro.au>

"Barry A. Warsaw" wrote:
> Thanks to a quick chat with Tim, who is always quick to grasp the meat
> of the issue, we realize we need to & 0xffffffff all the 32 bit
> unsigned ints we're reading out of the .mo files.  I'll work out a
> patch, and check it in after a test on 32-bit Linux.  Watch for it,
> and please try it out on your box.

Yep - works fine on my 64-bitter (well, it certainly passes the test
<grin>)

Mark
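The masking Barry describes can be sketched like this (read_u32 is a
hypothetical helper, not the actual gettext.py code; the explicit mask
is what guarantees the same 32-bit value on 64-bit platforms):

```python
import struct

def read_u32(data, offset):
    # Hypothetical helper: read a little-endian 32-bit unsigned int
    # and mask it so the value is identical on 32- and 64-bit builds.
    (value,) = struct.unpack("<I", data[offset:offset + 4])
    return value & 0xFFFFFFFF

assert read_u32(b"\xff\xff\xff\xff", 0) == 0xFFFFFFFF
```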



From skip at mojam.com  Sat Sep  2 04:03:51 2000
From: skip at mojam.com (Skip Montanaro)
Date: Fri, 1 Sep 2000 21:03:51 -0500 (CDT)
Subject: [Python-Dev] Re: DEBUG_SAVEALL feature for gc not in 2.0b1?
In-Reply-To: <20000901161032.B9121@keymaster.enme.ucalgary.ca>
References: <20000901153612.A9121@keymaster.enme.ucalgary.ca>
	<200009012147.XAA28215@python.inrialpes.fr>
	<20000901161032.B9121@keymaster.enme.ucalgary.ca>
Message-ID: <14768.24583.622144.16075@beluga.mojam.com>

    Neil> On Fri, Sep 01, 2000 at 11:47:59PM +0200, Vladimir Marangozov wrote:
    >> By putting them in gc.collected_garbage. The next collect() should be
    >> able to empty this list if the DEBUG_SAVEALL flag is not set. Do you
    >> see any problems with this?

    Neil> I don't really see the point.  If someone has set the SAVEALL flag
    Neil> then they are obviously debugging a program.  I don't see much
    Neil> point in the GC cleaning up this garbage.  The user can do it if
    Neil> they like.

Agreed.

    Neil> I have an idea for an alternate interface.  What if there was a
    Neil> gc.handle_garbage hook which could be set to a function?  The
    Neil> collector would pass garbage objects to this function one at a
    Neil> time.  If the function returns true then it means that the garbage
    Neil> was handled and the collector should not call tp_clear.  These
    Neil> handlers could be chained together like import hooks.  The default
    Neil> handler would simply append to the gc.garbage list.  If a
    Neil> debugging flag was set then all found garbage would be passed to
    Neil> this function rather than just uncollectable garbage.

    Neil> Skip, would a hook like this be useful to you?

Sounds too complex for my feeble brain... ;-)

What's the difference between "found garbage" and "uncollectable garbage"?
What sort of garbage are you appending to gc.garbage now?  I thought by the
very nature of your garbage collector, anything it could free was otherwise
"uncollectable".

S



From effbot at telia.com  Sat Sep  2 11:31:04 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Sat, 2 Sep 2000 11:31:04 +0200
Subject: [Python-Dev] Even more prerelease Python fun on Windows!
References: <LNBBLJKPBEHFEDALKOLCMEIDHDAA.tim_one@email.msn.com>
Message-ID: <007901c014c0$852eff60$766940d5@hagrid>

tim wrote:
> Thomas's Win98FE suffers infinitely more blue-screen deaths than Tim's
> Win98SE ever did <wink>.

just fyi, Tkinter seems to be extremely unstable on Win95 and
Win98FE (when shut down, the python process grabs the keyboard
and hangs.  the only way to kill the process is to reboot)

the same version of Tk (wish) works just fine...

</F>




From effbot at telia.com  Sat Sep  2 13:32:31 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Sat, 2 Sep 2000 13:32:31 +0200
Subject: [Python-Dev] "declare" reserved word (was: pragma)
References: <200009010237.OAA18429@s454.cosc.canterbury.ac.nz> <39AF6C4C.62451C87@lemburg.com>
Message-ID: <01b201c014d1$7c081a00$766940d5@hagrid>

mal wrote:
> I gave some examples in the other pragma thread. The main
> idea behind "declare" is to define flags at compilation
> time, the encoding of string literals being one of the
> original motivations for introducing these flags:
>
> declare encoding = "latin-1"
> x = u"This text will be interpreted as Latin-1 and stored as Unicode"
>
> declare encoding = "ascii"
> y = u"This is supposed to be ASCII, but contains ??? Umlauts - error !"

-1

for sanity's sake, we should only allow a *single* encoding per
source file.  anything else is madness.

besides, the goal should be to apply the encoding to the entire
file, not just the contents of string literals.

(hint: how many editing and display environments support multiple
encodings per text file?)

</F>




From mal at lemburg.com  Sat Sep  2 16:01:15 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Sat, 02 Sep 2000 16:01:15 +0200
Subject: [Python-Dev] "declare" reserved word (was: pragma)
References: <200009010237.OAA18429@s454.cosc.canterbury.ac.nz> <39AF6C4C.62451C87@lemburg.com> <01b201c014d1$7c081a00$766940d5@hagrid>
Message-ID: <39B1082B.4C9AB44@lemburg.com>

Fredrik Lundh wrote:
> 
> mal wrote:
> > I gave some examples in the other pragma thread. The main
> > idea behind "declare" is to define flags at compilation
> > time, the encoding of string literals being one of the
> > original motivations for introducing these flags:
> >
> > declare encoding = "latin-1"
> > x = u"This text will be interpreted as Latin-1 and stored as Unicode"
> >
> > declare encoding = "ascii"
> > y = u"This is supposed to be ASCII, but contains ??? Umlauts - error !"
> 
> -1

On the "declare" concept or just the above examples?
 
> for sanity's sake, we should only allow a *single* encoding per
> source file.  anything else is madness.

Uhm, the above was meant as two *separate* examples. I completely
agree that multiple encodings per file should not be allowed
(this would be easy to implement in the compiler).
 
> besides, the goal should be to apply the encoding to the entire
> file, not just the contents of string literals.

I'm not sure this is a good idea. 

The only parts where the encoding matters are string
literals (unless I've overlooked some important detail).
All other parts which could contain non-ASCII text such as
comments are not seen by the compiler.

So all source code encodings should really be ASCII supersets
(even if just to make editing them using a plain 8-bit editor
sane).
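For illustration, the effect the proposed flag would have on a string
literal, mimicked with explicit decoding (a sketch in modern bytes/str
spelling, not the proposed compiler machinery):

```python
# Latin-1 bytes as they would sit in the source file:
raw = b"caf\xe9"

# Under declare encoding = "latin-1" the literal decodes fine:
text = raw.decode("latin-1")
assert text == "caf\u00e9"

# Under declare encoding = "ascii" the same bytes are an error:
try:
    raw.decode("ascii")
except UnicodeDecodeError:
    pass
```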

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From Vladimir.Marangozov at inrialpes.fr  Sat Sep  2 16:07:52 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Sat, 2 Sep 2000 16:07:52 +0200 (CEST)
Subject: [Python-Dev] Re: DEBUG_SAVEALL feature for gc not in 2.0b1?
In-Reply-To: <20000901161032.B9121@keymaster.enme.ucalgary.ca> from "Neil Schemenauer" at Sep 01, 2000 04:10:32 PM
Message-ID: <200009021407.QAA29710@python.inrialpes.fr>

Neil Schemenauer wrote:
> 
> On Fri, Sep 01, 2000 at 11:47:59PM +0200, Vladimir Marangozov wrote:
> > By putting them in gc.collected_garbage. The next collect() should be
> > able to empty this list if the DEBUG_SAVEALL flag is not set. Do you
> > see any problems with this?
> 
> I don't really see the point.  If someone has set the SAVEALL flag then
> they are obviously debugging a program.  I don't see much point
> in the GC cleaning up this garbage.  The user can do it if they like.

The point is that we have two types of garbage: collectable and
uncollectable. Uncollectable garbage is already saved in gc.garbage
with or without debugging.

Uncollectable garbage is the most harmful. Fixing the program to
avoid that garbage is supposed to have top-ranked priority.

The discussion now goes on taking that one step further, i.e.
make sure that no cycles are created at all, ever. This is what
Skip wants. Skip wants to have access to the collectable garbage and
clean up the code w.r.t. cycles as much as possible. Fine, but collectable
is priority 2 and mixing the two types of garbage is not nice. It is
not nice because the collector can deal with collectable garbage, but
gives up on the uncollectable one. This distinction in functionality
is important.

That's why I suggested to save the collectable garbage in gc.collected.

In this context, the name SAVEALL is a bit misleading. Uncollectable
garbage is already saved. What's missing is a flag & support to save
the collectable garbage. SAVECOLLECTED would be a name more on target.

Further, the collect() function should be able to clear gc.collected
if it is not empty and if SAVECOLLECTED is not set. This should not
be perceived as a big deal, though. I see it as a nicety for overall
consistency.

> 
> I have an idea for an alternate interface.  What if there was a
> gc.handle_garbage hook which could be set to a function?  The collector
> would pass garbage objects to this function one at a time.

This is too much. The idea here is to detect garbage earlier, but given
that one can set gc.set_threshold(1,0,0), thus invoking the collector on
every allocation, one gets the same effect with DEBUG_LEAK. There's
little to no added value.
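Vladimir's configuration can be sketched with the gc API roughly as
follows (settings restored afterwards; the exact 2.0-era interface may
differ slightly):

```python
import gc

old_debug, old_threshold = gc.get_debug(), gc.get_threshold()

# DEBUG_SAVEALL plus an aggressive threshold gives near-per-allocation
# garbage detection, which is the effect a hook would buy.
gc.set_debug(gc.DEBUG_SAVEALL)
gc.set_threshold(1, 0, 0)   # collect after every net container allocation

# ... run the code being debugged here ...

gc.set_debug(old_debug)          # restore the previous settings
gc.set_threshold(*old_threshold)
```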

Such a hook may also exercise the latest changes Jeremy checked in:
if an exception is raised after GC, Python will scream at you with
a fatal error. I don't think it's a good idea to mix Python and C too
much for such a low-level machinery as the garbage collector.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From nascheme at enme.ucalgary.ca  Sat Sep  2 16:08:48 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Sat, 2 Sep 2000 08:08:48 -0600
Subject: [Python-Dev] Re: DEBUG_SAVEALL feature for gc not in 2.0b1?
In-Reply-To: <14768.24583.622144.16075@beluga.mojam.com>; from Skip Montanaro on Fri, Sep 01, 2000 at 09:03:51PM -0500
References: <20000901153612.A9121@keymaster.enme.ucalgary.ca> <200009012147.XAA28215@python.inrialpes.fr> <20000901161032.B9121@keymaster.enme.ucalgary.ca> <14768.24583.622144.16075@beluga.mojam.com>
Message-ID: <20000902080848.A13169@keymaster.enme.ucalgary.ca>

On Fri, Sep 01, 2000 at 09:03:51PM -0500, Skip Montanaro wrote:
> What's the difference between "found garbage" and "uncollectable garbage"?

I use the term uncollectable garbage for objects that the collector
cannot call tp_clear on because of __del__ methods.  These objects are
added to gc.garbage (actually, just the instances).  If SAVEALL is
enabled then all objects found are saved in gc.garbage and tp_clear is
not called.

Here is an example of how to use my proposed handle_garbage hook:

	class Vertex:
		def __init__(self):
			self.edges = []
		def add_edge(self, e):
			self.edges.append(e)
		def __del__(self):
			do_something()

	class Edge:
		def __init__(self, vertex_in, vertex_out):
			self.vertex_in = vertex_in
			vertex_in.add_edge(self)
			self.vertex_out = vertex_out
			vertex_out.add_edge(self)
			
This graph structure contains cycles and will not be collected by
reference counting.  It is also "uncollectable" because it contains a
finalizer on a strongly connected component (ie. other objects in the
cycle are reachable from the __del__ method).  With the current garbage
collector, instances of Edge and Vertex will appear in gc.garbage when
found to be unreachable by the rest of Python.  The application could
then periodically do:

	for obj in gc.garbage:
		if isinstance(obj, Vertex):
			obj.__dict__.clear()

which would break the reference cycles.  If a handle_garbage hook
existed the application could do:

	def break_graph_cycle(obj, next=gc.handle_garbage):
		if isinstance(obj, Vertex):
			obj.__dict__.clear()
			return 1
		else:
			return next(obj)
	gc.handle_garbage = break_graph_cycle

If you had a leaking program you could use this hook to debug it:

	def debug_cycle(obj, next=gc.handle_garbage):
		print "garbage:", repr(obj)
		return next(obj)

The hook seems to be more general than the gc.garbage list.

  Neil

> What sort of garbage are you appending to gc.garbage now?  I thought by the
> very nature of your garbage collector, anything it could free was otherwise
> "uncollectable".
> 
> S



From Vladimir.Marangozov at inrialpes.fr  Sat Sep  2 16:37:18 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Sat, 2 Sep 2000 16:37:18 +0200 (CEST)
Subject: [Python-Dev] Re: ... gcmodule.c,2.9,2.10
In-Reply-To: <20000901094821.A5571@keymaster.enme.ucalgary.ca> from "Neil Schemenauer" at Sep 01, 2000 09:48:21 AM
Message-ID: <200009021437.QAA29774@python.inrialpes.fr>

Neil Schemenauer wrote:
> 
> On Fri, Sep 01, 2000 at 10:24:46AM -0400, Jeremy Hylton wrote:
> > Even people who do have problems with cyclic garbage don't necessarily
> > need a collection every 100 allocations.  (Is my understanding of what
> > the threshold measures correct?)
> 
> It collects every net threshold0 allocations.  If you create and delete
> 1000 container objects in a loop then no collection would occur.
> 
> > But the difference in total memory consumption with the threshold at
> > 100 vs. 1000 vs. 5000 is not all that noticable, a few MB.

A few megabytes?  Phew! Jeremy -- more power mem to you!
I agree with Neil. 5000 is too high and the purpose of the inclusion
of the collector in the beta is precisely to exercise it & get feedback!
With a threshold of 5000 you've almost disabled the collector, leaving us
only with the memory overhead and the slowdown <wink>.

In short, bring it back to something low, please.
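Neil's "net threshold0 allocations" point means a balanced
create/delete loop never triggers a collection; roughly (sketched with
the modern gc API):

```python
import gc

gc.set_threshold(100)        # collect after 100 *net* container allocations
for _ in range(1000):
    x = []                   # +1 to the generation-0 counter
    del x                    # -1: net change per iteration is zero,
                             # so no collection is ever triggered
```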

[Neil]
> A portable way to find the total allocated memory would be nice.
> Perhaps Vladimir's malloc will help us here.

Yep, the mem profiler. The profiler currently collects stats if
enabled. This is slow and unusable in production code. But if the
profiler is disabled, Python runs at full speed. However, the profiler
will include an interface which will ask the mallocs on how much real
mem they manage. This is not implemented yet... Maybe the real mem
interface should go in a separate 'memory' module; don't know yet.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From Vladimir.Marangozov at inrialpes.fr  Sat Sep  2 17:00:47 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Sat, 2 Sep 2000 17:00:47 +0200 (CEST)
Subject: [Python-Dev] Even more prerelease Python fun on Windows!
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEIDHDAA.tim_one@email.msn.com> from "Tim Peters" at Sep 01, 2000 05:53:09 PM
Message-ID: <200009021500.RAA00776@python.inrialpes.fr>

Tim Peters wrote:
> 
> As below, except the new file is
> 
>     /pub/windows/beopen-python2b1p2-20000901.exe
>     5,783,115 bytes
> 
> still from anonymous FTP at python.beopen.com.  The p1 version has been
> removed.

In case my feedback matters, being a Windows amateur, the installation
went smoothly on my home P100 with some early Win95 pre-release. In the
great Windows tradition, I was asked to reboot & did so. The regression
tests passed in console mode. Then I launched IDLE successfully. In IDLE
I get *beep* sounds every time I hit RETURN without typing anything.
I was able to close both the console and IDLE without problems. Haven't
tried the uninstall link, though.

don't-ask-me-any-questions-about-Windows'ly y'rs
-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From guido at beopen.com  Sat Sep  2 17:56:30 2000
From: guido at beopen.com (Guido van Rossum)
Date: Sat, 02 Sep 2000 10:56:30 -0500
Subject: [Python-Dev] Verbosity of the Makefile
In-Reply-To: Your message of "Fri, 01 Sep 2000 20:56:47 -0400."
             <20000901205647.A27038@ludwig.cnri.reston.va.us> 
References: <39AFE911.927AEDDF@lemburg.com> <200009011934.OAA02358@cj20424-a.reston1.va.home.com> <39AFF735.F9F3A252@lemburg.com> <200009012011.PAA02974@cj20424-a.reston1.va.home.com> <20000901142136.A8205@keymaster.enme.ucalgary.ca>  
            <20000901205647.A27038@ludwig.cnri.reston.va.us> 
Message-ID: <200009021556.KAA02142@cj20424-a.reston1.va.home.com>

> On 01 September 2000, Neil Schemenauer said:
> > I'm going to pipe up again about non-recursive makefiles being a good
> > thing.  This is another reason.

Greg Ward:
> +1 in principle.  I suspect un-recursifying Python's build system would
> be a pretty conclusive demonstration of whether the "Recursive Makefiles
Considered Harmful" thesis holds water.  Want to try to hack something
> together one of these days?  (Probably not for 2.0, though.)

To me this seems like a big waste of time.

I see nothing broken with the current setup.  The verbosity is taken
care of by "make -s", for individuals who don't want Make saying
anything.  Another useful option is "make --no-print-directory"; this
removes Make's noisiness about changing directories.  If the pgen
output really bothers you, then let's direct it to /dev/null.  None of
these issues seem to require getting rid of the Makefile recursion.

If it ain't broken, don't fix it!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Sat Sep  2 18:00:29 2000
From: guido at beopen.com (Guido van Rossum)
Date: Sat, 02 Sep 2000 11:00:29 -0500
Subject: [Python-Dev] Even more prerelease Python fun on Windows!
In-Reply-To: Your message of "Sat, 02 Sep 2000 17:00:47 +0200."
             <200009021500.RAA00776@python.inrialpes.fr> 
References: <200009021500.RAA00776@python.inrialpes.fr> 
Message-ID: <200009021600.LAA02199@cj20424-a.reston1.va.home.com>

[Vladimir]

> In IDLE I get *beep* sounds every time I hit RETURN without typing
> anything.

This appears to be a weird side effect of the last change I made in
IDLE:

----------------------------
revision 1.28
date: 2000/03/07 18:51:49;  author: guido;  state: Exp;  lines: +24 -0
Override the Undo delegator to forbid any changes before the I/O mark.
It beeps if you try to insert or delete before the "iomark" mark.
This makes the shell less confusing for newbies.
----------------------------

I hope we can fix this before 2.0 final goes out...

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From skip at mojam.com  Sat Sep  2 17:09:49 2000
From: skip at mojam.com (Skip Montanaro)
Date: Sat, 2 Sep 2000 10:09:49 -0500 (CDT)
Subject: [Python-Dev] Re: DEBUG_SAVEALL feature for gc not in 2.0b1?
In-Reply-To: <200009021407.QAA29710@python.inrialpes.fr>
References: <20000901161032.B9121@keymaster.enme.ucalgary.ca>
	<200009021407.QAA29710@python.inrialpes.fr>
Message-ID: <14769.6205.428574.926100@beluga.mojam.com>

    Vlad> The discussion now goes on taking that one step further, i.e.
    Vlad> make sure that no cycles are created at all, ever. This is what
    Vlad> Skip wants. Skip wants to have access to the collectable garbage
    Vlad> and cleanup at best the code w.r.t. cycles. 

If I read my (patched) version of gcmodule.c correctly, with the
gc.DEBUG_SAVEALL bit set, gc.garbage *does* acquire all garbage, not just
the stuff with __del__ methods.  In delete_garbage I see

    if (debug & DEBUG_SAVEALL) {
	    PyList_Append(garbage, op);
    } else {
            ... usual collection business here ...
    }
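
On the Python side, the effect of that branch is directly observable (a
minimal sketch; the flag still behaves this way in current interpreters):

```python
import gc

# With DEBUG_SAVEALL set, every unreachable object the collector finds
# is appended to gc.garbage instead of being freed -- not just objects
# with __del__ methods.
gc.set_debug(gc.DEBUG_SAVEALL)

class Node:
    pass

a, b = Node(), Node()
a.peer, b.peer = b, a        # build a reference cycle
del a, b

gc.collect()
cycle_members = [o for o in gc.garbage if isinstance(o, Node)]

gc.set_debug(0)              # back to normal collection
gc.garbage.clear()
```

After the collect, both Node instances show up in gc.garbage even though
neither has a __del__ method.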

Skip



From Vladimir.Marangozov at inrialpes.fr  Sat Sep  2 17:43:05 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Sat, 2 Sep 2000 17:43:05 +0200 (CEST)
Subject: [Python-Dev] Re: DEBUG_SAVEALL feature for gc not in 2.0b1?
In-Reply-To: <14769.6205.428574.926100@beluga.mojam.com> from "Skip Montanaro" at Sep 02, 2000 10:09:49 AM
Message-ID: <200009021543.RAA01638@python.inrialpes.fr>

Skip Montanaro wrote:
> 
> If I read my (patched) version of gcmodule.c correctly, with the
> gc.DEBUG_SAVEALL bit set, gc.garbage *does* acquire all garbage, not just
> the stuff with __del__ methods.

Yes. And then you don't know which objects are collectable by this
collector and which ones are not. That is, SAVEALL transforms the
collector into a cycle detector. The collectable and uncollectable
objects belong to two disjoint sets. I was arguing about this
distinction because collectable garbage is not considered garbage any
more, while uncollectable garbage is the real garbage left over; but if
you think this distinction doesn't serve any purpose for you, fine.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From effbot at telia.com  Sat Sep  2 18:05:33 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Sat, 2 Sep 2000 18:05:33 +0200
Subject: [Python-Dev] Bug #113254: pre/sre difference breaks pyclbr
Message-ID: <029001c014f7$a203a780$766940d5@hagrid>

paul prescod spotted this discrepancy:

from the documentation:

    start ([group]) 
    end ([group]) 
        Return the indices of the start and end of the
        substring matched by group; group defaults to
        zero (meaning the whole matched substring). Return
        None if group exists but did not contribute to the
        match.

however, it turns out that PCRE doesn't do what it's
supposed to:

>>> import pre
>>> m = pre.match("(a)|(b)", "b")
>>> m.start(1)
-1

unlike SRE:

>>> import sre
>>> m = sre.match("(a)|(b)", "b")
>>> m.start(1)
>>> print m.start(1)
None

this difference breaks 1.6's pyclbr (1.5.2's pyclbr works
just fine with SRE, though...)

:::

should I fix SRE and ask Fred to fix the docs, or should
someone fix pyclbr and maybe even PCRE?

</F>




From guido at beopen.com  Sat Sep  2 19:18:48 2000
From: guido at beopen.com (Guido van Rossum)
Date: Sat, 02 Sep 2000 12:18:48 -0500
Subject: [Python-Dev] Bug #113254: pre/sre difference breaks pyclbr
In-Reply-To: Your message of "Sat, 02 Sep 2000 18:05:33 +0200."
             <029001c014f7$a203a780$766940d5@hagrid> 
References: <029001c014f7$a203a780$766940d5@hagrid> 
Message-ID: <200009021718.MAA02318@cj20424-a.reston1.va.home.com>

> paul prescod spotted this discrepancy:
> 
> from the documentation:
> 
>     start ([group]) 
>     end ([group]) 
>         Return the indices of the start and end of the
>         substring matched by group; group defaults to
>         zero (meaning the whole matched substring). Return
>         None if group exists but did not contribute to the
>         match.
> 
> however, it turns out that PCRE doesn't do what it's
> supposed to:
> 
> >>> import pre
> >>> m = pre.match("(a)|(b)", "b")
> >>> m.start(1)
> -1
> 
> unlike SRE:
> 
> >>> import sre
> >>> m = sre.match("(a)|(b)", "b")
> >>> m.start(1)
> >>> print m.start(1)
> None
> 
> this difference breaks 1.6's pyclbr (1.5.2's pyclbr works
> just fine with SRE, though...)
> 
> :::
> 
> should I fix SRE and ask Fred to fix the docs, or should
> someone fix pyclbr and maybe even PCRE?

I'd suggest fixing SRE and the docs, because -1 is a more useful
indicator for "no match" than None: it has the same type as valid
indices.  It makes it easier to adapt to static typing later.
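
For the record, this is the behavior that won and that the modern re
module still exhibits (a quick sketch):

```python
import re

# Group 2 participates in the match; group 1 exists but does not.
m = re.match("(a)|(b)", "b")

span1 = (m.start(1), m.end(1))   # non-participating group: (-1, -1)
span2 = (m.start(2), m.end(2))   # matched "b" at positions 0..1
g1 = m.group(1)                  # group() itself still returns None
```

So start()/end() report -1 for a group that didn't contribute to the
match, while group() keeps returning None.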

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From effbot at telia.com  Sat Sep  2 18:54:57 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Sat, 2 Sep 2000 18:54:57 +0200
Subject: [Python-Dev] Bug #113254: pre/sre difference breaks pyclbr
References: <029001c014f7$a203a780$766940d5@hagrid>  <200009021718.MAA02318@cj20424-a.reston1.va.home.com>
Message-ID: <02d501c014fe$88aa8860$766940d5@hagrid>

[me]
> > from the documentation:
> > 
> >     start ([group]) 
> >     end ([group]) 
> >         Return the indices of the start and end of the
> >         substring matched by group; group defaults to
> >         zero (meaning the whole matched substring). Return
> >         None if group exists but did not contribute to the
> >         match.
> > 
> > however, it turns out that PCRE doesn't do what it's
> > supposed to:
> > 
> > >>> import pre
> > >>> m = pre.match("(a)|(b)", "b")
> > >>> m.start(1)
> > -1

[guido]
> I'd suggest fix SRE and the docs, because -1 is a more useful
> indicator for "no match" than None: it has the same type as valid
> indices.  It makes it easier to adapt to static typing later.

sounds reasonable.  I've fixed the code, leaving the docs to Fred.

this should probably go into 1.6 as well, since pyclbr depends on
it (well, I assume it does -- the pyclbr in the current repository
does, but maybe it's only been updated in the 2.0 code base?)

</F>




From jeremy at beopen.com  Sat Sep  2 19:33:47 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Sat, 2 Sep 2000 13:33:47 -0400
Subject: [Python-Dev] Re: ... gcmodule.c,2.9,2.10
In-Reply-To: <200009021437.QAA29774@python.inrialpes.fr>
Message-ID: <AJEAKILOCCJMDILAPGJNEEKFCBAA.jeremy@beopen.com>

Vladimir Marangozov wrote:
>Neil Schemenauer wrote:
>>
>> On Fri, Sep 01, 2000 at 10:24:46AM -0400, Jeremy Hylton wrote:
>> > Even people who do have problems with cyclic garbage don't necessarily
>> > need a collection every 100 allocations.  (Is my understanding of what
>> > the threshold measures correct?)
>>
>> It collects every net threshold0 allocations.  If you create and delete
>> 1000 container objects in a loop then no collection would occur.
>>
>> > But the difference in total memory consumption with the threshold at
>> > 100 vs. 1000 vs. 5000 is not all that noticable, a few MB.
>
>A few megabytes?  Phew! Jeremy -- more power mem to you!
>I agree with Neil. 5000 is too high and the purpose of the inclusion
>of the collector in the beta is precisely to exercise it & get feedback!
>With a threshold of 5000 you've almost disabled the collector, leaving us
>only with the memory overhead and the slowdown <wink>.
>
>In short, bring it back to something low, please.

I am happy to bring it to a lower number, but not as low as it was.  I
increased it forgetting that it was net allocations and not simply
allocations.  Of course, it's not exactly net allocations because if
deallocations occur while the count is zero, they are ignored.

My reason for disliking the previous lower threshold is that it causes
frequent collections, even in programs that produce no cyclic garbage.  I
understand the garbage collector to be a supplement to the existing
reference counting mechanism, which we expect to work correctly for most
programs.

The benefit of collecting the cyclic garbage periodically is to reduce the
total amount of memory the process uses, by freeing some memory to be reused
by malloc.  The specific effect on process memory depends on the program's
high-water mark for memory use and how much of that memory is consumed by
cyclic trash.  (GC also allows finalization to occur where it might not have
before.)

In one test I did, the high-water marks for a program run with 3000 GC
collections and with 300 GC collections were 13MB and 11MB, a difference
of a little less than 20%.

The old threshold (100 net allocations) was low enough that most scripts ran
several collections during compilation of the bytecode.  The only containers
created during compilation (or loading .pyc files) are the dictionaries that
hold constants.  If the GC is supplemental, I don't believe its threshold
should be set so low that it runs long before any cycles could be created.

The default threshold can be fairly high, because a program that has
problems caused by cyclic trash can set the threshold lower or explicitly
call the collector.  If we assume these programs are less common, there is
no reason to make all programs suffer all of the time.

I have trouble reasoning about the behavior of the pseudo-net allocations
count, but think I would be happier with a higher threshold.  I might find
it easier to understand if the count were of total allocations and
deallocations, with GC occurring every N allocation events.
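
A toy model of the two counting schemes makes the difference concrete
(class and method names here are illustrative only, not the actual
gcmodule internals): under net counting, Neil's create-and-delete loop
never triggers a collection, while counting total allocation events
would.

```python
class NetCounter:
    """Collect every `threshold` *net* allocations; deallocations
    that arrive while the count is zero are ignored."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.count = 0
        self.collections = 0

    def alloc(self):
        self.count += 1
        if self.count >= self.threshold:
            self.collections += 1
            self.count = 0

    def dealloc(self):
        if self.count > 0:
            self.count -= 1


class TotalCounter:
    """Collect every `threshold` allocation *events*, counting
    allocations and deallocations alike."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.count = 0
        self.collections = 0

    def _event(self):
        self.count += 1
        if self.count >= self.threshold:
            self.collections += 1
            self.count = 0

    alloc = _event
    dealloc = _event


# Create and delete 1000 container objects in a loop, as in Neil's
# example, with a threshold of 100.
net, total = NetCounter(100), TotalCounter(100)
for _ in range(1000):
    net.alloc(); total.alloc()
    net.dealloc(); total.dealloc()
```

The net counter stays at zero collections forever; the event counter
fires 20 times (2000 events / 100).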

Any suggestions about what a more reasonable value would be and why it is
reasonable?

Jeremy





From skip at mojam.com  Sat Sep  2 19:43:06 2000
From: skip at mojam.com (Skip Montanaro)
Date: Sat, 2 Sep 2000 12:43:06 -0500 (CDT)
Subject: [Python-Dev] Re: DEBUG_SAVEALL feature for gc not in 2.0b1?
In-Reply-To: <200009021543.RAA01638@python.inrialpes.fr>
References: <14769.6205.428574.926100@beluga.mojam.com>
	<200009021543.RAA01638@python.inrialpes.fr>
Message-ID: <14769.15402.630192.4454@beluga.mojam.com>

    Vlad> Skip Montanaro wrote:
    >> 
    >> If I read my (patched) version of gcmodule.c correctly, with the
    >> gc.DEBUG_SAVEALL bit set, gc.garbage *does* acquire all garbage, not
    >> just the stuff with __del__ methods.

    Vlad> Yes. And you don't know which objects are collectable and which
    Vlad> ones are not by this collector. That is, SAVEALL transforms the
    Vlad> collector in a cycle detector. 

Which is precisely what I want.  I'm trying to locate cycles in a
long-running program.  In that environment collectable and uncollectable
garbage are just as bad since I still use 1.5.2 in production.

Skip



From tim_one at email.msn.com  Sat Sep  2 20:20:18 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 2 Sep 2000 14:20:18 -0400
Subject: [Python-Dev] Re: ... gcmodule.c,2.9,2.10
In-Reply-To: <AJEAKILOCCJMDILAPGJNEEKFCBAA.jeremy@beopen.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEKDHDAA.tim_one@email.msn.com>

[Neil and Vladimir say a threshold of 5000 is too high!]

[Jeremy says a threshold of 100 is too low!]

[merriment ensues]

> ...
> Any suggestions about what a more reasonable value would be and why
> it is reasonable?
>
> Jeremy

There's not going to be consensus on this, as the threshold is a crude handle on a complex
problem.  That's sure better than *no* handle, but trash behavior is so app-specific that
there simply won't be a killer argument.

In cases like this, the geometric mean of the extreme positions is always the best guess
<0.8 wink>:

>>> import math
>>> math.sqrt(5000 * 100)
707.10678118654755
>>>

So 9 times out of 10 we can run it with a threshold of 707, and 1 out of 10 with 708
<wink>.

Tuning strategies for gc *can* get as complex as OS scheduling algorithms, and for the
same reasons:  you're in the business of predicting the future based on just a few neurons
keeping track of gross summaries of what happened before.  A program can go through many
phases of quite different behavior over its life (like I/O-bound vs compute-bound, or
cycle-happy vs not), and at the phase boundaries past behavior is worse than irrelevant
(it's actively misleading).

So call it 700 for now.  Or 1000.  It's a bad guess at a crude heuristic regardless, and
if we avoid extreme positions we'll probably avoid doing as much harm as we *could* do
<0.9 wink>.  Over time, a more interesting measure may be how much cyclic trash
collections actually recover, and then collect less often the less trash we're finding
(ditto more often when we're finding more).  Another is like that, except replace "trash"
with "cycles (whether trash or not)".  The gross weakness of "net container allocations"
is that it doesn't directly measure what this system was created to do.
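
Whatever compromise value wins, programs that care can override it
themselves through the gc module's tuning API (a small sketch):

```python
import gc

old = gc.get_threshold()   # the (threshold0, threshold1, threshold2) tuple
gc.set_threshold(700)      # the geometric-mean compromise: collect every
                           # ~700 net container allocations
n = gc.collect()           # a program can also force a collection explicitly
gc.set_threshold(*old)     # restore the interpreter defaults
```

Programs plagued by cyclic trash can lower the threshold or call
gc.collect() at known quiet points; everyone else keeps the default.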

These things *always* wind up with dynamic measures, because static ones are just too
crude across apps.  Then the dynamic measures fail at phase boundaries too, and more
gimmicks are added to compensate for that.  Etc.  Over time it will get better for most
apps most of the time.  For now, we want *both* to exercise the code in the field and not
waste too much time, so hasty compromise is good for the beta.

let-a-thousand-thresholds-bloom-ly y'rs  - tim





From tim_one at email.msn.com  Sat Sep  2 20:46:33 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 2 Sep 2000 14:46:33 -0400
Subject: [Python-Dev] Bug #113254: pre/sre difference breaks pyclbr
In-Reply-To: <02d501c014fe$88aa8860$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEKFHDAA.tim_one@email.msn.com>

[start/end (group)  documented to return None for group that
 didn't participate in the match
 sre does this
 pre actually returned -1
 this breaks pyclbr.py
 Guido sez pre's behavior is better & the docs should be changed
]

[/F]
> sounds reasonable.  I've fixed the code, leaving the docs to Fred.
>
> this should probably go into 1.6 as well, since pyclbr depends on
> it (well, I assume it does -- the pyclbr in the current repository
> does, but maybe it's only been updated in the 2.0 code base?)

Good point.  pyclbr got changed last year, to speed it up and make it more robust for IDLE's
class browser display.  Which has another curious role to play in this screwup!  When
rewriting pyclbr's parsing, I didn't remember what start(group) would do for a
non-existent group.  In the old days I would have looked up the docs.  But since I had
gotten into the habit of *living* in an IDLE box all day, I just tried it instead and
"ah! -1 ... makes sense, I'll use that" was irresistible.  Since any code relying on the
docs would not have worked (None is the wrong type, and even the wrong value viewed as
boolean), the actual behavior should indeed win here.





From cgw at fnal.gov  Sat Sep  2 17:27:53 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Sat, 2 Sep 2000 10:27:53 -0500 (CDT)
Subject: [Python-Dev] Verbosity of the Makefile
In-Reply-To: <200009021556.KAA02142@cj20424-a.reston1.va.home.com>
References: <39AFE911.927AEDDF@lemburg.com>
	<200009011934.OAA02358@cj20424-a.reston1.va.home.com>
	<39AFF735.F9F3A252@lemburg.com>
	<200009012011.PAA02974@cj20424-a.reston1.va.home.com>
	<20000901142136.A8205@keymaster.enme.ucalgary.ca>
	<20000901205647.A27038@ludwig.cnri.reston.va.us>
	<200009021556.KAA02142@cj20424-a.reston1.va.home.com>
Message-ID: <14769.7289.688557.827915@buffalo.fnal.gov>

Guido van Rossum writes:

 > To me this seems like a big waste of time.
 > I see nothing broken with the current setup. 

I've built Python on every kind of system we have at FNAL, which means
Linux, several versions of Solaris, IRIX, DEC^H^H^HCompaq OSF/1, even
(shudder) WinNT, and the only complaint I've ever had with the build
system is that it doesn't do a "make depend" automatically.  (I don't
care too much about the dependencies on system headers, but the
Makefiles should at least know about the dependencies on Python's own
.h files, so when you change something like opcode.h or node.h it is
properly handled.  Fred got bitten by this when he tried to apply the
EXTENDED_ARG patch.)

Personally, I think that the "Recursive Make Considered Harmful" paper
is a bunch of hot air.  Many highly successful projects - the Linux
kernel, glibc, etc. - use recursive Make.

 > If it ain't broken, don't fix it!

Amen!



From cgw at fnal.gov  Fri Sep  1 21:19:58 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Fri, 1 Sep 2000 14:19:58 -0500 (CDT)
Subject: [Python-Dev] Verbosity of the Makefile
In-Reply-To: <200009012011.PAA02974@cj20424-a.reston1.va.home.com>
References: <39AFE911.927AEDDF@lemburg.com>
	<200009011934.OAA02358@cj20424-a.reston1.va.home.com>
	<39AFF735.F9F3A252@lemburg.com>
	<200009012011.PAA02974@cj20424-a.reston1.va.home.com>
Message-ID: <14768.350.21353.538473@buffalo.fnal.gov>

For what it's worth, lots of verbosity in the Makefile makes me happy.
But I'm a verbose sort of guy...

(Part of the reason for sending this is to test if my mail is going
through.  Looks like there's currently no route from fnal.gov to
python.org, I wonder where the problem is?)



From cgw at fnal.gov  Fri Sep  1 18:06:48 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Fri, 1 Sep 2000 11:06:48 -0500 (CDT)
Subject: [Python-Dev] linuxaudiodev test does nothing
In-Reply-To: <20000901114945.A15688@ludwig.cnri.reston.va.us>
References: <14766.50976.102853.695767@buffalo.fnal.gov>
	<Pine.LNX.4.10.10008311804230.11804-100000@server1.lfw.org>
	<20000901114945.A15688@ludwig.cnri.reston.va.us>
Message-ID: <14767.54296.278370.953550@buffalo.fnal.gov>

Greg Ward wrote:

 > ...but the sound is horrible: various people opined on this list, many
 > months ago when I first reported the problem, that it's probably a
 > format problem.  (The wav/au mixup seems a likely candidate; it can't be
 > an endianness problem, since the .au file is 8-bit!)

Did you see the msg I sent yesterday?  (Maybe I send out too many mails)

I'm 99.9% sure it's a format problem, because if you replace
"audiotest.au" with some random ".wav" file, it works. (On my system
anyhow, with pretty generic cheapo soundblaster)

The code in test_linuxaudiodev.py has no chance of ever working
correctly: if you send mu-law encoded (i.e. logarithmic) data to a
device expecting linear data, you will get noise.  You have to set the
format first.  And the functions in linuxaudiodev which are intended
to set the format don't work, and go against what is recommended in
the OSS programming documentation.
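
The mismatch is easy to make concrete: a mu-law byte is not a sample
value but a packed sign/exponent/mantissa triple, so feeding it to a
linear device is pure noise. A minimal decoder sketch, using the
standard G.711 constants (at the time the audioop module provided this
in C as ulaw2lin):

```python
def ulaw2lin(sample):
    """Decode one 8-bit mu-law byte to a 16-bit linear PCM value."""
    sample = ~sample & 0xFF              # mu-law bytes are stored inverted
    sign = sample & 0x80                 # 1 sign bit ...
    exponent = (sample >> 4) & 0x07      # ... 3 exponent bits ...
    mantissa = sample & 0x0F             # ... 4 mantissa bits
    # 0x84 is the standard G.711 bias added before encoding
    linear = (((mantissa << 3) + 0x84) << exponent) - 0x84
    return -linear if sign else linear
```

The full-scale codes decode to +/-32124, nowhere near the byte values a
linear device would expect.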

IMHO this code is up for a complete rewrite, which I will submit post
2.0.  

The quick-and-dirty fix for the 2.0 release is to include
"audiotest.wav" and modify test_linuxaudiodev.py accordingly.


Ka-Ping Yee <ping at lfw.org> wrote:
> Are you talking about OSS vs. ALSA?  Didn't they at least try to
> keep some of the basic parts of the interface the same?

No, I'm talking about SoundBlaster8 vs. SoundBlaster16
vs. ProAudioSpectrum vs. Gravis vs. AdLib vs. TurtleBeach vs.... you
get the idea.  You can't know what formats are supported until you
probe the hardware.  Most of these cards *don't* handle logarithmic
data; and *then* depending on whether you have OSS or Alsa there may be
driver-side code to convert logarithmic data to linear before sending
it to the hardware.

The lowest common denominator, however, is raw 8-bit linear unsigned
data, which tends to be supported on all PC audio hardware.








From cgw at fnal.gov  Fri Sep  1 18:09:02 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Fri, 1 Sep 2000 11:09:02 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14766.54177.584090.198596@beluga.mojam.com>
References: <39AEC0F4.746656E2@per.dem.csiro.au>
	<14766.50283.758598.632542@bitdiddle.concentric.net>
	<14766.53002.467504.523298@beluga.mojam.com>
	<14766.53381.634928.615048@buffalo.fnal.gov>
	<14766.54177.584090.198596@beluga.mojam.com>
Message-ID: <14767.54430.927663.710733@buffalo.fnal.gov>

Skip Montanaro writes:
 > 
 > Makes no difference:
 > 
 >     % ulimit -a
 >     stack size (kbytes)         unlimited
 >     % ./python Misc/find_recursionlimit.py
 >     Limit of 2400 is fine
 >     repr
 >     Segmentation fault
 > 
 > Skip

This means that you're not hitting the rlimit at all but getting a
real segfault!  Time to do ulimit -c unlimited and break out GDB,
I'd say.
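
For reference, the probing idea behind Misc/find_recursionlimit.py can
be sketched in pure Python. This only exercises Python-frame recursion,
not the C-level recursion through repr() and friends that the real
script tests, and RecursionError is the modern name for what was a
RuntimeError back then:

```python
import sys

def probe(depth=1):
    """Recurse until the interpreter raises, and report how deep we got."""
    try:
        return probe(depth + 1)
    except RecursionError:
        return depth

old_limit = sys.getrecursionlimit()
sys.setrecursionlimit(2400)      # the limit Skip's run reported as "fine"
depth = probe()
sys.setrecursionlimit(old_limit)
```

When the limit check works, you get a clean exception at roughly the
configured depth; Skip's segfault means something past that check blew
the real C stack.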



From cgw at fnal.gov  Fri Sep  1 01:01:22 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Thu, 31 Aug 2000 18:01:22 -0500 (CDT)
Subject: [Python-Dev] linuxaudiodev test does nothing
In-Reply-To: <Pine.LNX.4.10.10008311804230.11804-100000@server1.lfw.org>
References: <14766.50976.102853.695767@buffalo.fnal.gov>
	<Pine.LNX.4.10.10008311804230.11804-100000@server1.lfw.org>
Message-ID: <14766.58306.977241.439169@buffalo.fnal.gov>

Ka-Ping Yee writes:

 > Side note: is there a well-defined platform-independent sound
 > interface we should be conforming to?  It would be nice to have a
 > single Python function for each of the following things:
 > 
 >     1. Play a .wav file given its filename.
 > 
 >     2. Play a .au file given its filename.

These may be possible.

 >     3. Play some raw audio data, given a string of bytes and a
 >        sampling rate.

This would never be possible unless you also specified the format and
encoding of the raw data - is it 8-bit or 16-bit, signed or unsigned,
big-endian or little-endian, linear or logarithmic ("mu-law"), etc.?

Not only that, but some audio hardware will support some formats and
not others.  Some sound drivers will attempt to convert from a data
format which is not supported by the audio hardware to one which is,
and others will just reject the data if it's not in a format supported
by the hardware.  Trying to do anything with sound in a
platform-independent manner is near-impossible.  Even the same
"platform" (e.g. RedHat 6.2 on Intel) will behave differently
depending on what soundcard is installed.
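
This is exactly why container formats exist: a WAV file carries the
parameters a bare byte string lacks. A small round-trip sketch with the
standard wave module, using the 8-bit unsigned mono format mentioned
below as the usual lowest common denominator:

```python
import io
import wave

# Write one second of 8-bit unsigned mono silence into an in-memory
# WAV container, then read the parameters back out of the header.
buf = io.BytesIO()
w = wave.open(buf, "wb")
w.setnchannels(1)                    # mono
w.setsampwidth(1)                    # 8-bit samples
w.setframerate(8000)                 # 8 kHz sampling rate
w.writeframes(bytes([128]) * 8000)   # unsigned 8-bit silence is 128
w.close()

buf.seek(0)
r = wave.open(buf, "rb")
params = (r.getnchannels(), r.getsampwidth(),
          r.getframerate(), r.getnframes())
r.close()
```

A hypothetical play_raw(data, rate) function would need at least the
first three of those parameters, plus the encoding, to do anything
sensible.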



From skip at mojam.com  Sat Sep  2 22:37:54 2000
From: skip at mojam.com (Skip Montanaro)
Date: Sat, 2 Sep 2000 15:37:54 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14767.54430.927663.710733@buffalo.fnal.gov>
References: <39AEC0F4.746656E2@per.dem.csiro.au>
	<14766.50283.758598.632542@bitdiddle.concentric.net>
	<14766.53002.467504.523298@beluga.mojam.com>
	<14766.53381.634928.615048@buffalo.fnal.gov>
	<14766.54177.584090.198596@beluga.mojam.com>
	<14767.54430.927663.710733@buffalo.fnal.gov>
Message-ID: <14769.25890.529541.831812@beluga.mojam.com>

    >> % ulimit -a
    >> stack size (kbytes)         unlimited
    >> % ./python Misc/find_recursionlimit.py
    >> ...
    >> Limit of 2400 is fine
    >> repr
    >> Segmentation fault

    Charles> This means that you're not hitting the rlimit at all but
    Charles> getting a real segfault!  Time to do setrlimit -c unlimited and
    Charles> break out GDB, I'd say.

Running the program under gdb does no good.  It segfaults and winds up with
a corrupt stack as far as the debugger is concerned.  For some reason bash
won't let me set a core file size != 0 either:

    % ulimit -c
    0
    % ulimit -c unlimited
    % ulimit -c
    0

though I doubt letting the program dump core would be any better
debugging-wise than just running the interpreter under gdb's control.

Kinda weird.
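
As an aside, the resource module offers a per-process way to raise the
core-file limit from inside Python. A sketch (it won't explain why
Skip's bash ignores ulimit, but it sidesteps the shell entirely):

```python
import resource

# Read the current core-file size limits and raise the soft limit to
# the hard limit, so a crashing process may actually leave a core file.
soft, hard = resource.getrlimit(resource.RLIMIT_CORE)
resource.setrlimit(resource.RLIMIT_CORE, (hard, hard))
new_soft, new_hard = resource.getrlimit(resource.RLIMIT_CORE)
```

An unprivileged process may always raise its soft limit up to the hard
limit, so this succeeds wherever the hard limit permits core dumps at
all.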

Skip



From thomas at xs4all.net  Sat Sep  2 23:36:47 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Sat, 2 Sep 2000 23:36:47 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14767.54430.927663.710733@buffalo.fnal.gov>; from cgw@fnal.gov on Fri, Sep 01, 2000 at 11:09:02AM -0500
References: <39AEC0F4.746656E2@per.dem.csiro.au> <14766.50283.758598.632542@bitdiddle.concentric.net> <14766.53002.467504.523298@beluga.mojam.com> <14766.53381.634928.615048@buffalo.fnal.gov> <14766.54177.584090.198596@beluga.mojam.com> <14767.54430.927663.710733@buffalo.fnal.gov>
Message-ID: <20000902233647.Q12695@xs4all.nl>

On Fri, Sep 01, 2000 at 11:09:02AM -0500, Charles G Waldman wrote:
> Skip Montanaro writes:
>  > Makes no difference:

>  >     stack size (kbytes)         unlimited
>  >     % ./python Misc/find_recursionlimit.py
>  >     Limit of 2400 is fine
>  >     repr
>  >     Segmentation fault

> This means that you're not hitting the rlimit at all but getting a
> real segfault!  Time to do setrlimit -c unlimited and break out GDB,
> I'd say.

Yes, which I did (well, my girlfriend was hogging the PC with 'net
connection, and there was nothing but silly soft-porn on TV, so I spent an
hour or two on my laptop ;) and I did figure out the problem isn't
stackspace (which was already obvious) but *damned* if I know what the
problem is. 

Here's an easy way to step through the whole procedure, though. Take a
recursive script, like the one Guido posted:

    i = 0
    class C:
      def __getattr__(self, name):
          global i
          print i
          i += 1
          return self.name # common beginners' mistake

Run it once, so you get a ballpark figure for when it'll crash, and then
add a branch right before it would crash, calling some obscure function
(os.getpid() works nicely; it's a very simple function.)  This was about
2926 or so on my laptop (adding the branch changed this number, oddly
enough.)

    import os
    i = 0
    class C:
      def __getattr__(self, name):
          global i
          print i
          i += 1
          if (i > 2625):
              os.getpid()
          return self.name # common beginners' mistake

(I also moved the 'print i' to inside the branch, which saved me a bit of
scrollin'.)  Then start GDB on the python binary, set a breakpoint on
posix_getpid, and "run 'test.py'".  You'll end up pretty close to where
the interpreter decides to go belly-up.  Setting a breakpoint on ceval.c
line 612 (the 'opcode = NEXTOP();' line) or so at that point helps you do
a per-bytecode check, though this made me miss the actual point of
failure, and I don't fancy doing it again just yet :P  What I did see,
however, was that the reason for the crash isn't the pure recursion.  It
looks like the recursiveness *does* get caught properly, and the
interpreter raises an error.  And then it prints that error over and over
again, probably once for every call to getattr(), and eventually *that*
crashes (but why, I don't know).  In one test I did, it crashed in
int_print, the print function for int objects, which did
'fprintf(fp, "%ld", v->ival);'.  The actual SEGV arrived inside fprintf's
internals.  v->ival was a valid integer (though a high one), and the
problem was not dereferencing 'v'.  'fp' was stderr, according to its
_fileno member.

'ltrace' (if you have it) is also a nice tool to let loose on this kind of
script, by the way, though it does make the test take a lot longer, and you
really need enough diskspace to store the output ;-P

Back-to-augassign-docs-ly y'rs,

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From Vladimir.Marangozov at inrialpes.fr  Sun Sep  3 00:06:41 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Sun, 3 Sep 2000 00:06:41 +0200 (CEST)
Subject: [Python-Dev] Re: ... gcmodule.c,2.9,2.10
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOEKDHDAA.tim_one@email.msn.com> from "Tim Peters" at Sep 02, 2000 02:20:18 PM
Message-ID: <200009022206.AAA02255@python.inrialpes.fr>

Tim Peters wrote:
>
> There's not going to be consensus on this, as the threshold is a crude 
> handle on a complex problem.  

Hehe. Tim gets philosophic again <wink>  

>
> In cases like this, the geometric mean of the extreme positions is 
> always the best guess <0.8 wink>:
> 
> >>> import math
> >>> math.sqrt(5000 * 100)
> 707.10678118654755
> >>>
>
> So 9 times out of 10 we can run it with a threshold of 707, and 1 out of 10 
> with 708 <wink>.
> 
> Tuning strategies for gc *can* get as complex as OS scheduling algorithms, 
> and for the same reasons:  you're in the business of predicting the future 
> based on just a few neurons keeping track of gross summaries of what 
> happened before. 
> ...
> [snip]

Right on target, Tim! It is well known that the recent past is the best 
approximation of the near future and that the past as a whole is the only
approximation we have at our disposal of the long-term future. If you add 
to that axioms like "memory management schemes influence the OS long-term 
scheduler", "the 50% rule applies for all allocation strategies", etc.,
it is clear that if we want to approach the optimum, we definitely need
to adjust the collection frequency according to some proportional scheme.

But even without saying this, your argument about dynamic GC thresholds
is enough to put Neil into a state of deep depression regarding the
current GC API <0.9 wink>.

Now let's be pragmatic: it is clear that the garbage collector will
make it for 2.0 -- be it enabled or disabled by default. So let's stick
to a compromise: 500 for the beta, 1000 for the final release. This
somewhat complies with your geometric calculus, which mainly aims at
balancing the expressed opinions. It certainly isn't founded on any
existing theory or practice, and we all realized that despite the
impressive math.sqrt() <wink>.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From cgw at alum.mit.edu  Sun Sep  3 02:52:33 2000
From: cgw at alum.mit.edu (Charles G Waldman)
Date: Sat, 2 Sep 2000 19:52:33 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions? 
In-Reply-To: <20000902233647.Q12695@xs4all.nl> 
References: <39AEC0F4.746656E2@per.dem.csiro.au> 
                <14766.50283.758598.632542@bitdiddle.concentric.net> 
                <14766.53002.467504.523298@beluga.mojam.com> 
                <14766.53381.634928.615048@buffalo.fnal.gov> 
                <14766.54177.584090.198596@beluga.mojam.com> 
                <14767.54430.927663.710733@buffalo.fnal.gov> 
                <20000902233647.Q12695@xs4all.nl> 
Message-ID: <14769.41169.108895.723628@sirius.net.home>

I said:
 > This means that you're not hitting the rlimit at all but getting a 
 > real segfault!  Time to do setrlimit -c unlimited and break out GDB, 
 > I'd say.   
 
Thomas Wouters came back with: 
> I did figure out the problem isn't stackspace (which was already
> obvious) but *damned* if I know what the problem is.  I don't fancy
> doing it again just yet :P:P What I did see, however, was that the
> reason for the crash isn't the pure recursion. It looks like the
> recursiveness *does* get caught properly, and the interpreter raises
> an error. And then prints that error over and over again, probably
> once for every call to getattr(), and eventually *that* crashes (but
> why, I don't know. In one test I did, it crashed in int_print, the
> print function for int objects, which did 'fprintf(fp, "%ld",
> v->ival);'. The actual SEGV arrived inside fprintf's
> internals. v->ival was a valid integer (though a high one) and the
> problem was not dereferencing 'v'. 'fp' was stderr, according to its
> _fileno member.
 
I've got some more info: this crash only happens if you have built
with --enable-threads.  This brings in a different (thread-safe)
version of fprintf, which uses mutex locks on file objects so output
from different threads doesn't get scrambled together.  And the SEGV
that I saw was happening exactly where fprintf is trying to unlock the
mutex on stderr, so it can print "Maximum recursion depth exceeded".
 
This looks like more ammo for Guido's theory that there's something 
wrong with libpthread on linux, and right now I'm elbows-deep in the 
guts of libpthread trying to find out more.  Fun little project for a
Saturday night ;-)      
 
> 'ltrace' (if you have it) is also a nice tool to let loose on this
> kind of script, by the way, though it does make the test take a lot
> longer, and you really need enough diskspace to store the output ;-P
 
Sure, I've got ltrace, and also more diskspace than you really want to 
know about!

Working-at-a-place-with-lots-of-machines-can-be-fun-ly yr's,
					
					-Charles
 




From m.favas at per.dem.csiro.au  Sun Sep  3 02:53:11 2000
From: m.favas at per.dem.csiro.au (Mark Favas)
Date: Sun, 03 Sep 2000 08:53:11 +0800
Subject: [Python-Dev] failure in test_sre???
Message-ID: <39B1A0F7.D8FF0076@per.dem.csiro.au>

Is it just me, or is test_sre meant to fail, following the recent
changes to _sre.c?

Short failure message:
test test_sre failed -- Writing: 'sre.match("\\x%02x" % i, chr(i)) !=
None', expected: ''

Full failure messages:
Running tests on character literals
sre.match("\x%02x" % i, chr(i)) != None FAILED
Traceback (most recent call last):
  File "test_sre.py", line 18, in test
    r = eval(expression)
ValueError: invalid \x escape
sre.match("\x%02x0" % i, chr(i)+"0") != None FAILED
Traceback (most recent call last):
  File "test_sre.py", line 18, in test
    r = eval(expression)
ValueError: invalid \x escape
sre.match("\x%02xz" % i, chr(i)+"z") != None FAILED
Traceback (most recent call last):
  File "test_sre.py", line 18, in test
    r = eval(expression)
ValueError: invalid \x escape

(the above sequence is repeated another 7 times) 

-- 
Mark



From m.favas at per.dem.csiro.au  Sun Sep  3 04:05:03 2000
From: m.favas at per.dem.csiro.au (Mark Favas)
Date: Sun, 03 Sep 2000 10:05:03 +0800
Subject: [Python-Dev] Namespace collision between lib/xml and 
 site-packages/xml
References: <200009010400.XAA30273@cj20424-a.reston1.va.home.com>
Message-ID: <39B1B1CF.572955FC@per.dem.csiro.au>

Guido van Rossum wrote:
> 
> You might be able to get the old XML-sig code to override the core xml
> package by creating a symlink named _xmlplus to it in site-packages
> though.

Nope - doing this allows the imports to succeed where before they were
failing, but I get a "SAXException: No parsers found" failure now. No
big deal - I'll probably rename the xml-sig stuff and include it in my
app.

-- 
Mark



From tim_one at email.msn.com  Sun Sep  3 05:18:31 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 2 Sep 2000 23:18:31 -0400
Subject: [Python-Dev] failure in test_sre???
In-Reply-To: <39B1A0F7.D8FF0076@per.dem.csiro.au>
Message-ID: <LNBBLJKPBEHFEDALKOLCCELCHDAA.tim_one@email.msn.com>

[Mark Favas, on new test_sre failures]
> Is it just me, or is test_sre meant to fail, following the recent
> changes to _sre.c?

Checkins are never supposed to leave the test suite in a failing state, but
while that's "the rule" it's still too rarely the reality (although *much*
better than it was just a month ago -- whining works <wink>).  Offhand these
look like shallow new failures to me, related to /F's so-far partial
implementation of PEP 223 (Change the Meaning of \x Escapes).  I'll dig into it a
little more.  Rest assured it will get fixed before the 2.0b1 release!

> Short failure message:
> test test_sre failed -- Writing: 'sre.match("\\x%02x" % i, chr(i)) !=
> None', expected: ''
>
> Full failure messages:
> Running tests on character literals
> sre.match("\x%02x" % i, chr(i)) != None FAILED
> Traceback (most recent call last):
>   File "test_sre.py", line 18, in test
>     r = eval(expression)
> ValueError: invalid \x escape
> sre.match("\x%02x0" % i, chr(i)+"0") != None FAILED
> Traceback (most recent call last):
>   File "test_sre.py", line 18, in test
>     r = eval(expression)
> ValueError: invalid \x escape
> sre.match("\x%02xz" % i, chr(i)+"z") != None FAILED
> Traceback (most recent call last):
>   File "test_sre.py", line 18, in test
>     r = eval(expression)
> ValueError: invalid \x escape
>
> (the above sequence is repeated another 7 times)
>
> --
> Mark
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://www.python.org/mailman/listinfo/python-dev





From skip at mojam.com  Sun Sep  3 06:25:49 2000
From: skip at mojam.com (Skip Montanaro)
Date: Sat, 2 Sep 2000 23:25:49 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <20000902233647.Q12695@xs4all.nl>
References: <39AEC0F4.746656E2@per.dem.csiro.au>
	<14766.50283.758598.632542@bitdiddle.concentric.net>
	<14766.53002.467504.523298@beluga.mojam.com>
	<14766.53381.634928.615048@buffalo.fnal.gov>
	<14766.54177.584090.198596@beluga.mojam.com>
	<14767.54430.927663.710733@buffalo.fnal.gov>
	<20000902233647.Q12695@xs4all.nl>
Message-ID: <14769.53966.93066.283106@beluga.mojam.com>

    Thomas> In one test I did, it crashed in int_print, the print function
    Thomas> for int objects, which did 'fprintf(fp, "%ld", v->ival);'.  The
    Thomas> actual SEGV arrived inside fprintf's internals. v->ival was a
    Thomas> valid integer (though a high one) and the problem was not
    Thomas> dereferencing 'v'. 'fp' was stderr, according to its _fileno
    Thomas> member.

I get something similar.  The script conks out after 4491 calls (this with a
threaded interpreter).  It segfaults in _IO_vfprintf trying to print 4492 to
stdout.  All arguments to _IO_vfprintf appear valid (though I'm not quite
sure how to print the third, va_list, argument).

When I configure --without-threads, the script runs much longer, making it
past 18068.  It conks out in the same spot, however, trying to print 18069.
The fact that it occurs in the same place with and without threads (the
addresses of the two different _IO_vfprintf functions are different, which
implies different stdio libraries are active in the threading and
non-threading versions as Thomas said), suggests to me that the problem may
simply be that in the threading case each thread (even the main thread) is
limited to a much smaller stack.  Perhaps I'm seeing what I'm supposed to
see.  If the two versions were to crap out for different reasons, I doubt
I'd see them failing in the same place.
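The interpreter-level guard the script is hunting for can be probed
without crashing; a minimal sketch using today's sys API (RecursionError
is the modern name for the error these mails see raised and re-printed):

```python
import sys

def probe_depth():
    """Recurse until the interpreter's recursion guard fires and report
    how deep we got -- well short of a real C-stack overflow."""
    depth = 0
    def recurse():
        nonlocal depth
        depth += 1
        recurse()
    try:
        recurse()
    except RecursionError:
        pass
    return depth

sys.setrecursionlimit(2000)
# The observed depth is a bit below the limit: some frames are
# already on the stack before recurse() starts.
print(probe_depth())
```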

Skip





From cgw at fnal.gov  Sun Sep  3 07:34:24 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Sun, 3 Sep 2000 00:34:24 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14769.53966.93066.283106@beluga.mojam.com>
References: <39AEC0F4.746656E2@per.dem.csiro.au>
	<14766.50283.758598.632542@bitdiddle.concentric.net>
	<14766.53002.467504.523298@beluga.mojam.com>
	<14766.53381.634928.615048@buffalo.fnal.gov>
	<14766.54177.584090.198596@beluga.mojam.com>
	<14767.54430.927663.710733@buffalo.fnal.gov>
	<20000902233647.Q12695@xs4all.nl>
	<14769.53966.93066.283106@beluga.mojam.com>
Message-ID: <14769.58081.532.747747@buffalo.fnal.gov>

Skip Montanaro writes:

 > When I configure --without-threads, the script runs much longer, making it
 > past 18068.  It conks out in the same spot, however, trying to print 18069.
 > The fact that it occurs in the same place with and without threads (the
 > addresses of the two different _IO_vfprintf functions are different, which
 > implies different stdio libraries are active in the threading and
 > non-threading versions as Thomas said), suggests to me that the problem may
 > simply be that in the threading case each thread (even the main thread) is
 > limited to a much smaller stack.  Perhaps I'm seeing what I'm supposed to
 > see.  If the two versions were to crap out for different reasons, I doubt
 > I'd see them failing in the same place.

Yes, libpthread defines its own version of _IO_vfprintf.

Try this experiment:  do a "ulimit -a" to see what the stack size
limit is; start your Python process; find its PID, and before you
start your test, go into another window and run the command
watch -n 0 "grep Stk /proc/<pythonpid>/status"

This will show exactly how much stack Python is using.  Then start the
runaway-recursion test.  If it craps out when the stack usage hits the
rlimit, you are seeing what you are supposed to see.  If it craps out
anytime sooner, there is a real bug of some sort, as I'm 99% sure
there is.
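The same check can be scripted instead of watched; a small sketch,
assuming a Linux /proc layout where the status file carries a VmStk
(stack segment) line in kB:

```python
def stack_usage_kb(pid="self"):
    """Return the stack segment size reported for a process, in kB,
    or None if /proc doesn't expose it (non-Linux systems)."""
    try:
        with open("/proc/%s/status" % pid) as f:
            for line in f:
                if line.startswith("VmStk"):
                    return int(line.split()[1])  # e.g. "VmStk:  132 kB"
    except OSError:
        pass
    return None

print(stack_usage_kb())
```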



From thomas at xs4all.net  Sun Sep  3 09:44:51 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Sun, 3 Sep 2000 09:44:51 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14769.41169.108895.723628@sirius.net.home>; from cgw@alum.mit.edu on Sat, Sep 02, 2000 at 07:52:33PM -0500
References: <39AEC0F4.746656E2@per.dem.csiro.au> <14766.50283.758598.632542@bitdiddle.concentric.net> <14766.53002.467504.523298@beluga.mojam.com> <14766.53381.634928.615048@buffalo.fnal.gov> <14766.54177.584090.198596@beluga.mojam.com> <14767.54430.927663.710733@buffalo.fnal.gov> <20000902233647.Q12695@xs4all.nl> <14769.41169.108895.723628@sirius.net.home>
Message-ID: <20000903094451.R12695@xs4all.nl>

On Sat, Sep 02, 2000 at 07:52:33PM -0500, Charles G Waldman wrote:

> This looks like more ammo for Guido's theory that there's something 
> wrong with libpthread on linux, and right now I'm elbows-deep in the 
> guts of libpthread trying to find out more.  Fun little project for a
> Saturday night ;-)      

I concur that it's probably not Python-related, even if it's probably
Python-triggered (and possibly Python-induced, because of some setting or
other) -- but I think it would be very nice to work around it! And we have
roughly the same recursion limit for BSDI with a 2Mbyte stack limit, so let's
not adjust that guesstimate just yet.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From tim_one at email.msn.com  Sun Sep  3 10:25:38 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 3 Sep 2000 04:25:38 -0400
Subject: [Python-Dev] failure in test_sre???
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCELCHDAA.tim_one@email.msn.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCIELOHDAA.tim_one@email.msn.com>

> [Mark Favas, on new test_sre failures]
> > Is it just me, or is test_sre meant to fail, following the recent
> > changes to _sre.c?

I just checked in a fix for this.  /F also implemented PEP 223, and it had a
surprising consequece for test_sre!  There were three test lines (in a loop,
that's why you got so many failures) of the form:

    test(r"""sre.match("\x%02x" % i, chr(i)) != None""", 1)

Note the

    "\x%02x"

part.  Before PEP 223, that "expanded" to itself:

    "\x%02x"

because the damaged \x escape was ignored.  After PEP 223, it raised the

    ValueError: invalid \x escape

you kept seeing.  The fix was merely to change these 3 lines to use, e.g.,

    r"\x%02x"

instead.  Pattern strings should usually be r-strings anyway.
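The moral generalizes; a quick illustration with today's re module
(sre's successor):

```python
import re

# In a raw string the backslash stays literal, so \x41 reaches the
# regex engine, which interprets it as the character 0x41 ("A").
assert re.match(r"\x%02x" % 0x41, "A") is not None

# A non-raw "\x%02x" would be rejected by the *string literal* parser
# (invalid \x escape) before the regex engine ever saw it.
```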





From Vladimir.Marangozov at inrialpes.fr  Sun Sep  3 11:21:42 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Sun, 3 Sep 2000 11:21:42 +0200 (CEST)
Subject: [Python-Dev] Copyright gag
Message-ID: <200009030921.LAA08963@python.inrialpes.fr>

Even CVS got confused about the Python's copyright <wink>

~> cvs update
...
cvs server: Updating Demo/zlib
cvs server: Updating Doc
cvs server: nothing known about Doc/COPYRIGHT
cvs server: Updating Doc/api
cvs server: Updating Doc/dist
...

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From effbot at telia.com  Sun Sep  3 12:10:01 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Sun, 3 Sep 2000 12:10:01 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src LICENSE,1.1.2.7,1.1.2.8
References: <200009030228.TAA12677@slayer.i.sourceforge.net>
Message-ID: <00a501c0158f$25a5bfa0$766940d5@hagrid>

guido wrote:
> Modified Files:
>       Tag: cnri-16-start
> LICENSE 
> Log Message:
> Set a release date, now that there's agreement between
> CNRI and the FSF.

and then he wrote:

> Modified Files:
> LICENSE 
> Log Message:
> Various edits.  Most importantly, added dual licensing.  Also some
> changes suggested by BobW.

where "dual licensing" means:

    ! 3. Instead of using this License, you can redistribute and/or modify
    ! the Software under the terms of the GNU General Public License as
    ! published by the Free Software Foundation; either version 2, or (at
    ! your option) any later version.  For a copy of the GPL, see
    ! http://www.gnu.org/copyleft/gpl.html.
  
what's going on here?  what exactly does the "agreement" mean?

(I can guess, but my guess doesn't make me happy. I didn't really
think I would end up in a situation where people can take code I've
written, make minor modifications to it, and re-release it in source
form in a way that makes it impossible for me to use it...)

</F>




From guido at beopen.com  Sun Sep  3 16:03:46 2000
From: guido at beopen.com (Guido van Rossum)
Date: Sun, 03 Sep 2000 09:03:46 -0500
Subject: [Python-Dev] Re: Conflict with the GPL
In-Reply-To: Your message of "Sun, 03 Sep 2000 12:09:12 +0200."
             <00a401c0158f$24dc5520$766940d5@hagrid> 
References: <LNBBLJKPBEHFEDALKOLCAEGFHDAA.tim_one@email.msn.com> <39AF83F9.67DA7A0A@lemburg.com> <dcwvgu56li.fsf@pacific.beopen.com>  
            <00a401c0158f$24dc5520$766940d5@hagrid> 
Message-ID: <200009031403.JAA11856@cj20424-a.reston1.va.home.com>

> bob weiner wrote:    
> > We are doing a lot of work at BeOpen with CNRI to get them to allow
> > the GPL as an alternative license across the CNRI-derived parts of the
> > codebase.  /.../  We at BeOpen want GPL-compatibility and have pushed
> > for that since we started with any Python licensing issues.

Fredrik Lundh replied:
> my understanding was that the consortium members agreed
> that GPL-compatibility was important, but that it didn't mean
> that a licensing Python under GPL was a good thing.
> 
> was dual licensing discussed on the consortium meeting?

Can't remember, probably was mentioned as one of the considered
options.  Certainly the consortium members present at the meeting in
Monterey agreed that GPL compatibility was important.

> is the consortium (and this mailing list) irrelevant in this
> discussion?

You posted a +0 for dual licensing if it was the *only* possibility to
reach GPL-compatibility for future Python licenses.  That's also my
own stance on this.

I don't believe I received any other relevant feedback.  I did see
several posts from consortium members Paul Everitt and Jim Ahlstrom,
defending the choice of law clause in the CNRI license and explaining
why the GPL is not a great license and why a pure GPL license is
unacceptable for Python; I take these very seriously.

Bob Weiner and I talked for hours with Kahn on Friday night and
Saturday; I talked to Stallman several times on Saturday; Kahn and
Stallman talked on Saturday.  Dual licensing really was the *only* way
to reach an agreement.  So I saw no way out of the impasse except to
just do it and get it over with.

Kahn insisted that 1.6final be released before 2.0b1 and 2.0b1 be made
a derived work of 1.6final.  To show that he was serious, he shut off
our login access to python.org and threatened with legal action if we
would proceed with the 2.0b1 release as a derived work of 1.6b1.  I
don't understand why this is so important to him, but it clearly is.
I want 2.0b1 to be released (don't you?) so I put an extra effort in
to round up Stallman and make sure he and Kahn got on the phone to get
a resolution, and for a blissful few hours I believed it was all done.

Unfortunately the fat lady hasn't sung yet.

After we thought we had reached agreement, Stallman realized that
there are two interpretations of what will happen next:

    1. BeOpen releases a version for which the license is, purely and
    simply, the GPL.

    2. BeOpen releases a version which states the GPL as the license,
    and also states the CNRI license as applying with its text to part
    of the code.

His understanding of the agreement (and that of his attorney, Eben
Moglen, a law professor at NYU) was based on #1.  It appears that what
CNRI will explicitly allow BeOpen (and what the 1.6 license already
allows) is #2.  Stallman will have to get Moglen's opinion, which may
take weeks.  It's possible that they think that the BeOpen license is
still incompatible with the GPL.  In that case (assuming it happens
within a reasonable time frame, and not e.g. 5 years from now :-) we
have Kahn's agreement to go back to the negotiation table and talk to
Stallman about possible modifications to the CNRI license.  If the
license changes, we'll re-release Python 1.6 as 1.6.1 with the new
license, and we'll use that for BeOpen releases.  If dual-licensing is
no longer needed at that point I'm for taking it out again.

> > > BTW, anybody got a word from RMS on whether the "choice of law"
> > > is really the only one bugging him ?
> >
> > Yes, he has told me that was the only remaining issue.
> 
> what's the current status here?  Guido just checked in a new
> 2.0 license that doesn't match the text he posted here a few
> days ago.  Most notable, the new license says:
> 
>     3. Instead of using this License, you can redistribute and/or modify
>     the Software under the terms of the GNU General Public License as
>     published by the Free Software Foundation; either version 2, or (at
>     your option) any later version.  For a copy of the GPL, see
>     http://www.gnu.org/copyleft/gpl.html.
> 
> on the other hand, another checkin message mentions agreement
> between CNRI and the FSF.  did they agree to disagree?

I think I've explained most of this above.  I don't recall that
checkin message.  Which file?  I checked the cvs logs for README and
LICENSE for both the 1.6 and 2.0 branch.

Anyway, the status is that 1.6 final is incompatible with the GPL and
that for 2.0b1 we may or may not have GPL compatibility based on the
dual licensing clause.

I'm not too happy with the final wart.  We could do the following:
take the dual licensing clause out of 2.0b1, and promise to put it
back into 2.0final if it is still needed.  After all, it's only a
beta, and we don't *want* Debian to put 2.0b1 in their distribution,
do we?  But personally I'm of an optimistic nature; I still hope that
Moglen will find this solution acceptable and that this will be the
end of the story.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From effbot at telia.com  Sun Sep  3 15:36:52 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Sun, 3 Sep 2000 15:36:52 +0200
Subject: [Python-Dev] Re: Conflict with the GPL
References: <LNBBLJKPBEHFEDALKOLCAEGFHDAA.tim_one@email.msn.com> <39AF83F9.67DA7A0A@lemburg.com> <dcwvgu56li.fsf@pacific.beopen.com>              <00a401c0158f$24dc5520$766940d5@hagrid>  <200009031403.JAA11856@cj20424-a.reston1.va.home.com>
Message-ID: <005a01c015ac$079f1c00$766940d5@hagrid>

guido wrote:

> I want 2.0b1 to be released (don't you?) so I put an extra effort in
> to round up Stallman and make sure he and Kahn got on the phone to get
> a resolution, and for a blissful few hours I believed it was all done.

well, after reading the rest of your mail, I'm not so
sure...

> After we thought we had reached agreement, Stallman realized that
> there are two interpretations of what will happen next:
> 
>     1. BeOpen releases a version for which the license is, purely and
>     simply, the GPL.
> 
>     2. BeOpen releases a version which states the GPL as the license,
>     and also states the CNRI license as applying with its text to part
>     of the code.

"to part of the code"?

are you saying the 1.6 will be the last version that is
truly free for commercial use???

what parts would be GPL-only?

</F>




From guido at beopen.com  Sun Sep  3 16:35:31 2000
From: guido at beopen.com (Guido van Rossum)
Date: Sun, 03 Sep 2000 09:35:31 -0500
Subject: [Python-Dev] Re: Conflict with the GPL
In-Reply-To: Your message of "Sun, 03 Sep 2000 15:36:52 +0200."
             <005a01c015ac$079f1c00$766940d5@hagrid> 
References: <LNBBLJKPBEHFEDALKOLCAEGFHDAA.tim_one@email.msn.com> <39AF83F9.67DA7A0A@lemburg.com> <dcwvgu56li.fsf@pacific.beopen.com> <00a401c0158f$24dc5520$766940d5@hagrid> <200009031403.JAA11856@cj20424-a.reston1.va.home.com>  
            <005a01c015ac$079f1c00$766940d5@hagrid> 
Message-ID: <200009031435.JAA12281@cj20424-a.reston1.va.home.com>

> guido wrote:
> 
> > I want 2.0b1 to be released (don't you?) so I put an extra effort in
> > to round up Stallman and make sure he and Kahn got on the phone to get
> > a resolution, and for a blissful few hours I believed it was all done.
> 
> well, after reading the rest of your mail, I'm not so
> sure...

Agreed. :-(

> > After we thought we had reached agreement, Stallman realized that
> > there are two interpretations of what will happen next:
> > 
> >     1. BeOpen releases a version for which the license is, purely and
> >     simply, the GPL.
> > 
> >     2. BeOpen releases a version which states the GPL as the license,
> >     and also states the CNRI license as applying with its text to part
> >     of the code.
> 
> "to part of the code"?
> 
> are you saying the 1.6 will be the last version that is
> truly free for commercial use???
> 
> what parts would be GPL-only?

Aaaaargh!  Please don't misunderstand me!  No part of Python will be
GPL-only!  At best we'll dual license.

This was quoted directly from Stallman's mail about this issue.  *He*
doesn't care about the other half of the dual license, so he doesn't
mention it.

Sorry!!!!!!!!!!!!!!!!!!!!!!!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Sun Sep  3 17:18:07 2000
From: guido at beopen.com (Guido van Rossum)
Date: Sun, 03 Sep 2000 10:18:07 -0500
Subject: [Python-Dev] New commands to display license, credits, copyright info
Message-ID: <200009031518.KAA12926@cj20424-a.reston1.va.home.com>

The copyright in 2.0 will be 5 or 6 lines (three copyright statements,
with an "All Rights Reserved" for each -- according to CNRI's wishes).

This will cause a lot of scrolling at the start of a session.

Does anyone care?

Bob Weiner (my boss at BeOpen) suggested that we could add commands
to display such information instead.  Here's a typical suggestion with
his idea implemented:

    Python 2.0b1 (#134, Sep  3 2000, 10:04:03) 
    [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
    Type "copyright", "license" or "credits" for this information.
    >>> copyright
    Copyright (c) 2000 BeOpen.com; All Rights Reserved.
    Copyright (c) 1995-2000 Corporation for National Research Initiatives;
    All Rights Reserved.
    Copyright (c) 1991-1995 Stichting Mathematisch Centrum, Amsterdam;
    All Rights Reserved.

    >>> credits
    A BeOpen PythonLabs-led production.

    >>> license
    HISTORY OF THE SOFTWARE
    =======================

    Python was created in the early 1990s by Guido van Rossum at Stichting
    Mathematisch Centrum (CWI) in the Netherlands as a successor of a
    language called ABC.  Guido is Python's principal author, although it
        .
        .(etc)
        .
    Hit Return for more, or q (and Return) to quit: q

    >>>

How would people like this?  (The blank line before the prompt is
unavoidable due to the mechanics of how objects are printed.)

Any suggestions for what should go in the "credits" command?

(I considered taking the detailed (messy!) GCC version info out as
well, but decided against it.  There's a bit of a tradition in bug
reports to quote the interpreter header and showing the bug in a
sample session; the compiler version is often relevant.  Expecting
that bug reporters will include this information manually won't work.
Instead, I broke it up in two lines.)

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From cgw at alum.mit.edu  Sun Sep  3 17:53:08 2000
From: cgw at alum.mit.edu (Charles G Waldman)
Date: Sun, 3 Sep 2000 10:53:08 -0500 (CDT)
Subject: [Python-Dev] New commands to display licence, credits, copyright info
Message-ID: <14770.29668.639079.511087@sirius.net.home>

I like Bob W's suggestion a lot.  It is more open-ended and scalable
than just continuing to add more and more lines to the startup
messages.  I assume these commands would only be in effect in
interactive mode, right?

You could also maybe add a "help" command, which, if nothing else,
could get people pointed at the online tutorial/manuals.

And, by all means, please keep the compiler version in the startup
message!



From guido at beopen.com  Sun Sep  3 18:59:55 2000
From: guido at beopen.com (Guido van Rossum)
Date: Sun, 03 Sep 2000 11:59:55 -0500
Subject: [Python-Dev] New commands to display licence, credits, copyright info
In-Reply-To: Your message of "Sun, 03 Sep 2000 10:53:08 EST."
             <14770.29668.639079.511087@sirius.net.home> 
References: <14770.29668.639079.511087@sirius.net.home> 
Message-ID: <200009031659.LAA14864@cj20424-a.reston1.va.home.com>

> I like Bob W's suggestion a lot.  It is more open-ended and scalable
> than just continuing to add more and more lines to the startup
> messages.  I assume these commands would only be in effect in
> interactive mode, right?

Actually, for the benefit of tools like IDLE (which have an
interactive read-eval-print loop but don't appear to be interactive
during initialization), they are always added.  They are implemented
as funny builtins, whose repr() prints the info and then returns "".
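A minimal sketch of such a "funny builtin" (names here are
illustrative, not the eventual implementation):

```python
class _Printer:
    """Evaluating the bare name at the interactive prompt calls repr(),
    which prints the text and returns "" -- hence the stray blank line
    Guido mentions before the next prompt."""
    def __init__(self, text):
        self._text = text
    def __repr__(self):
        print(self._text)
        return ""

credits = _Printer("A BeOpen PythonLabs-led production.")
```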

> You could also maybe add a "help" command, which, if nothing else,
> could get people pointed at the online tutorial/manuals.

Sure -- and "doc".  Later, after 2.0b1.

> And, by all means, please keep the compiler version in the startup
> message!

Will do.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From cgw at alum.mit.edu  Sun Sep  3 18:02:09 2000
From: cgw at alum.mit.edu (Charles G Waldman)
Date: Sun, 3 Sep 2000 11:02:09 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix, etc
Message-ID: <14770.30209.733300.519614@sirius.net.home>

Skip Montanaro writes:

> When I configure --without-threads, the script runs much longer,
> making it past 18068.  It conks out in the same spot, however,
> trying to print 18069.

I am utterly unable to reproduce this.  With "ulimit -s unlimited" and
a no-threads version of Python, "find_recursionlimit" ran overnight on
my system and got up to a recursion depth of 98,400 before I killed it
off.  It was using 74MB of stack space at this point, and my system
was running *really* slow (probably because my pathetic little home
system only has 64MB of physical memory!).

Are you absolutely sure that when you built your non-threaded Python
you did a thorough housecleaning, eg. "make clobber"?  Sometimes I get
paranoid and type "make distclean", just to be sure - but this
shouldn't be necessary, right?

Can you give me more info about your system?  I'm at kernel 2.2.16,
gcc 2.95.2 and glibc-2.1.3.  How about you?

I've got to know what's going on here, because your experimental
results don't conform to my theory, and I'd rather change your results
than have to change my theory <wink>

     quizzically yr's,

		  -C







From tim_one at email.msn.com  Sun Sep  3 19:17:34 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 3 Sep 2000 13:17:34 -0400
Subject: [License-py20] Re: [Python-Dev] Re: Conflict with the GPL
In-Reply-To: <005a01c015ac$079f1c00$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEMJHDAA.tim_one@email.msn.com>

[Fredrik Lundh]
> ...
> are you saying the 1.6 will be the last version that is
> truly free for commercial use???

If this is a serious question, it disturbs me, because it would demonstrate
a massive meltdown in trust between the community and BeOpen PythonLabs.

If we were willing to screw *any* of Python's

   + Commercial users.
   + Open Source users.
   + GPL users.

we would have given up a month ago (when we first tried to release 2.0b1 with
a BSD-style license but got blocked).  Unfortunately, the only power we have
in this now is the power to withhold release until the other parties (CNRI
and FSF) agree on a license they can live with too.  If the community thinks
Guido would sell out Python's commercial users to get the FSF's blessing,
*or vice versa*, maybe we should just give up on the basis that we've lost
peoples' trust anyway.  Delaying the releases time after time sure isn't
helping BeOpen's bottom line.





From tim_one at email.msn.com  Sun Sep  3 19:43:15 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 3 Sep 2000 13:43:15 -0400
Subject: [Python-Dev] New commands to display license, credits, copyright info
In-Reply-To: <200009031518.KAA12926@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEMKHDAA.tim_one@email.msn.com>

[Guido]
> The copyright in 2.0 will be 5 or 6 lines (three copyright statements,
> with an "All Rights Reserved" for each -- according to CNRI's wishes).
>
> This will cause a lot of scrolling at the start of a session.
>
> Does anyone care?

I personally hate it:

C:\Code\python\dist\src\PCbuild>python
Python 2.0b1 (#0, Sep  3 2000, 00:31:47) [MSC 32 bit (Intel)] on win32
Copyright (c) 2000 BeOpen.com; All Rights Reserved.
Copyright (c) 1995-2000 Corporation for National Research Initiatives;
All Rights Reserved.
Copyright (c) 1991-1995 Stichting Mathematisch Centrum, Amsterdam;
All Rights Reserved.
>>>

Besides being plain ugly, under Win9x DOS boxes are limited to a max height
of 50 lines, and that's also the max buffer size.  This mass of useless
verbiage (I'm still a programmer 20 minutes of each day <0.7 wink>) has
already interfered with my ability to test the Windows version of Python
(half the old build's stuff I wanted to compare the new build's behavior
with scrolled off the screen the instant I started the new build!).

> Bob Weiner (my boss at BeOpen) suggested that we could add commands
> to display such information instead.  Here's a typical suggestion with
> his idea implemented:
>
>     Python 2.0b1 (#134, Sep  3 2000, 10:04:03)
>     [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
>     Type "copyright", "license" or "credits" for this information.
>     >>> ...

Much better.

+1.





From tim_one at email.msn.com  Sun Sep  3 21:59:36 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 3 Sep 2000 15:59:36 -0400
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src LICENSE,1.1.2.7,1.1.2.8
In-Reply-To: <00a501c0158f$25a5bfa0$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEMPHDAA.tim_one@email.msn.com>

[Fredrik Lundh]
> ...
> I didn't really think I would end up in a situation where people
> can take code I've written, make minor modifications to it, and re-
> release it in source form in a way that makes it impossible for me
> to use it...)

People have *always* been able to do that, /F.  The CWI license was
GPL-compatible (according to RMS), so anyone all along has been able to take
the Python distribution in whole or in part and re-release it under the
GPL -- or even more restrictive licenses than that.  Heck, they don't even
have to reveal their modifications to your code if they don't feel like it
(although they would have to under the GPL).

So there's nothing new here.  In practice, I don't think anyone yet has felt
abused (well, not by *this* <wink>).





From tim_one at email.msn.com  Sun Sep  3 22:22:43 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 3 Sep 2000 16:22:43 -0400
Subject: [Python-Dev] Copyright gag
In-Reply-To: <200009030921.LAA08963@python.inrialpes.fr>
Message-ID: <LNBBLJKPBEHFEDALKOLCIENBHDAA.tim_one@email.msn.com>

[Vladimir Marangozov]
> Sent: Sunday, September 03, 2000 5:22 AM
> To: Python core developers
> Subject: [Python-Dev] Copyright gag
>
> Even CVS got confused about the Python's copyright <wink>
>
> ~> cvs update
> ...
> cvs server: Updating Demo/zlib
> cvs server: Updating Doc
> cvs server: nothing known about Doc/COPYRIGHT
> cvs server: Updating Doc/api
> cvs server: Updating Doc/dist
> ...

Yes, we're all seeing that.  I filed a bug report on it with SourceForge; no
resolution yet; we can't get at the CVS files directly (for "security
reasons"), so they'll have to find the damage & fix it themselves.






From trentm at ActiveState.com  Sun Sep  3 23:10:43 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Sun, 3 Sep 2000 14:10:43 -0700
Subject: [Python-Dev] New commands to display license, credits, copyright info
In-Reply-To: <200009031518.KAA12926@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Sun, Sep 03, 2000 at 10:18:07AM -0500
References: <200009031518.KAA12926@cj20424-a.reston1.va.home.com>
Message-ID: <20000903141043.B28584@ActiveState.com>

On Sun, Sep 03, 2000 at 10:18:07AM -0500, Guido van Rossum wrote:
>     Python 2.0b1 (#134, Sep  3 2000, 10:04:03) 
>     [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2

Yes, I like getting rid of the copyright verbosity.

>     Type "copyright", "license" or "credits" for this information.
>     >>> copyright
>     >>> credits
>     >>> license
>     >>>

... but do we need these?  Can we not just add a -V or --version or
--copyright, etc. switches?  Not a big deal, though.


Trent

-- 
Trent Mick
TrentM at ActiveState.com



From nascheme at enme.ucalgary.ca  Mon Sep  4 01:28:04 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Sun, 3 Sep 2000 17:28:04 -0600
Subject: [Python-Dev] New commands to display license, credits, copyright info
In-Reply-To: <200009031518.KAA12926@cj20424-a.reston1.va.home.com>; from Guido van Rossum on Sun, Sep 03, 2000 at 10:18:07AM -0500
References: <200009031518.KAA12926@cj20424-a.reston1.va.home.com>
Message-ID: <20000903172804.A20336@keymaster.enme.ucalgary.ca>

On Sun, Sep 03, 2000 at 10:18:07AM -0500, Guido van Rossum wrote:
> Does anyone care?

Yes.  Although not too much.

> Bob Weiner (my boss at BeOpen) suggested that we could add commands
> to display such information instead.

Much nicer except for one nit.

>     Python 2.0b1 (#134, Sep  3 2000, 10:04:03) 
>     [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
>     Type "copyright", "license" or "credits" for this information.
                                                   ^^^^

For what information?

  Neil



From jeremy at beopen.com  Mon Sep  4 01:59:12 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Sun, 3 Sep 2000 19:59:12 -0400 (EDT)
Subject: [Python-Dev] New commands to display license, credits, copyright info
In-Reply-To: <20000903172804.A20336@keymaster.enme.ucalgary.ca>
References: <200009031518.KAA12926@cj20424-a.reston1.va.home.com>
	<20000903172804.A20336@keymaster.enme.ucalgary.ca>
Message-ID: <14770.58832.801784.267646@bitdiddle.concentric.net>

>>>>> "NS" == Neil Schemenauer <nascheme at enme.ucalgary.ca> writes:

  >> Python 2.0b1 (#134, Sep 3 2000, 10:04:03) 
  >> [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 
  >> Type "copyright", "license" or "credits" for this information.
  NS>                                             ^^^^
  NS> For what information?

I think this is a one-line version of 'Type "copyright" for copyright
information, "license" for license information, or "credits" for
credits information.'

I think the meaning is clear if the phrasing is awkward.  Would 'that'
be any better than 'this'?

Jeremy



From root at buffalo.fnal.gov  Mon Sep  4 02:00:00 2000
From: root at buffalo.fnal.gov (root)
Date: Sun, 3 Sep 2000 19:00:00 -0500
Subject: [Python-Dev] New commands to display license, credits, copyright info
Message-ID: <200009040000.TAA19857@buffalo.fnal.gov>

Jeremy wrote:

 > I think the meaning is clear if the phrasing is awkward.  Would 'that'
 > be any better than 'this'?

To my ears, "that" is just as awkward as "this".  But in this context,
I think "more" gets the point across and sounds much more natural.




From Vladimir.Marangozov at inrialpes.fr  Mon Sep  4 02:07:03 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Mon, 4 Sep 2000 02:07:03 +0200 (CEST)
Subject: [Python-Dev] libdb on by default, but no db.h
Message-ID: <200009040007.CAA14488@python.inrialpes.fr>

On my AIX combo, configure assumes --with-libdb (yes) but reports that

...
checking for db_185.h... no
checking for db.h... no
...

This leaves the bsddbmodule enabled but it can't compile, obviously.
So this needs to be fixed ASAP.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From Vladimir.Marangozov at inrialpes.fr  Mon Sep  4 03:16:20 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Mon, 4 Sep 2000 03:16:20 +0200 (CEST)
Subject: [Python-Dev] New commands to display license, credits, copyright info
In-Reply-To: <200009031518.KAA12926@cj20424-a.reston1.va.home.com> from "Guido van Rossum" at Sep 03, 2000 10:18:07 AM
Message-ID: <200009040116.DAA14774@python.inrialpes.fr>

Guido van Rossum wrote:
> 
> The copyright in 2.0 will be 5 or 6 lines (three copyright statements,
> with an "All Rights Reserved" for each -- according to CNRI's wishes).
> 
> This will cause a lot of scrolling at the start of a session.
> 
> Does anyone care?

Not much, but this is annoying information anyway :-)

> 
>     Python 2.0b1 (#134, Sep  3 2000, 10:04:03) 
>     [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
>     Type "copyright", "license" or "credits" for this information.
>     >>> copyright
>     Copyright (c) 2000 BeOpen.com; All Rights Reserved.
>     Copyright (c) 1995-2000 Corporation for National Research Initiatives;
>     All Rights Reserved.
>     Copyright (c) 1991-1995 Stichting Mathematisch Centrum, Amsterdam;
>     All Rights Reserved.

A semicolon before "All rights reserved" is ugly. IMO, it should be a period.
"All rights reserved" probably needs to go on a new line for each of the
three copyright holders. Additionally, they can be separated by a blank line
for readability.

Otherwise, I like the proposed "type ... for more information".

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From skip at mojam.com  Mon Sep  4 03:10:26 2000
From: skip at mojam.com (Skip Montanaro)
Date: Sun, 3 Sep 2000 20:10:26 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix, etc
In-Reply-To: <14770.30209.733300.519614@sirius.net.home>
References: <14770.30209.733300.519614@sirius.net.home>
Message-ID: <14770.63106.529258.156519@beluga.mojam.com>

    Charles> I am utterly unable to reproduce this.  With "ulimit -s
    Charles> unlimited" and a no-threads version of Python,
    Charles> "find_recursionlimit" ran overnight on my system and got up to
    Charles> a recursion depth of 98,400 before I killed it off.

Mea culpa.  It seems I forgot the "ulimit -s unlimited" command.  Keep your
theory, but get a little more memory.  It only took me a few seconds to
exceed a recursion depth of 100,000 after properly setting the stack size
limit... ;-)
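(The probe being run here amounts to something like the following sketch; the
function name is illustrative, not the actual `find_recursionlimit` from the
test suite. The interpreter's recursion limit is raised and then approached by
actual recursion; whether you get a clean exception or a segfault depends on
the C stack size set by `ulimit -s`.)

```python
import sys

def max_reached_depth(target):
    """Recurse toward `target` frames and report how deep we actually got.
    With a small C stack the recursion can crash the process instead of
    raising RecursionError -- which is the failure mode under discussion."""
    depth = [0]

    def recurse(n):
        depth[0] += 1
        if n > 1:
            recurse(n - 1)

    old = sys.getrecursionlimit()
    sys.setrecursionlimit(target + 100)  # headroom for the outer frames
    try:
        recurse(target)
    except RecursionError:
        pass  # clean failure: the interpreter-level limit caught us first
    finally:
        sys.setrecursionlimit(old)
    return depth[0]
```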

Skip






From cgw at alum.mit.edu  Mon Sep  4 04:33:24 2000
From: cgw at alum.mit.edu (Charles G Waldman)
Date: Sun, 3 Sep 2000 21:33:24 -0500
Subject: [Python-Dev] Thread problems on Linux
Message-ID: <200009040233.VAA27866@sirius>

No, I still don't have the answer, but I came across a very interesting
bit in the `info' files for glibc-2.1.3.  Under a heading "Specific Advice
for Linux Systems", along with a bunch of info about installing glibc,
is this gem:

 >    You cannot use `nscd' with 2.0 kernels, due to bugs in the
 > kernel-side thread support.  `nscd' happens to hit these bugs
 > particularly hard, but you might have problems with any threaded
 > program.

Now, they are talking about 2.0 and I assume everyone here running Linux
is running 2.2.  However it makes one wonder whether all the bugs in
kernel-side thread support are really fixed in 2.2.  One of these days
we'll figure it out...




From tim_one at email.msn.com  Mon Sep  4 04:44:28 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 3 Sep 2000 22:44:28 -0400
Subject: [Python-Dev] Thread problems on Linux
In-Reply-To: <200009040233.VAA27866@sirius>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEOBHDAA.tim_one@email.msn.com>

Did we ever get a little "pure C" program that illustrates the mystery here?
That's probably still the only way to get a Linux guru interested, and also
the best way to know whether the problem is fixed in a future release (i.e.,
by running the sucker and seeing whether it still misbehaves).

I could believe, e.g., that they fixed pthread locks fine, but that there's
still a subtle problem with pthread condition vrbls.  To the extent Jeremy's
stacktraces made any sense, they showed insane condvar symptoms (a parent
doing a pthread_cond_wait yet chewing cycles at a furious pace).
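In the meantime, a rough Python-level stress of the same pattern is easy to write. It won't isolate the kernel from glibc the way a pure C program would, but on pthreads builds Python's lock machinery sits on pthread mutexes and condition variables, so it at least reproduces the shape of the suspect behavior: a parent blocked in a condition wait while children notify it repeatedly. A hedged sketch (names illustrative):

```python
import threading

cond = threading.Condition()
hits = []

def worker(count):
    # Each iteration takes the lock, records a hit, and signals the parent.
    for _ in range(count):
        with cond:
            hits.append(1)
            cond.notify()

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
with cond:
    for t in threads:
        t.start()
    # The parent waits here; a buggy condvar implementation would spin
    # or hang in this loop instead of sleeping between notifications.
    while len(hits) < 4000:
        cond.wait(timeout=1.0)
for t in threads:
    t.join()
```

On a healthy build this finishes almost instantly with all 4000 notifications delivered.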





From tim_one at email.msn.com  Mon Sep  4 05:11:09 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 3 Sep 2000 23:11:09 -0400
Subject: [Python-Dev] Even more prerelease Python fun on Windows!
In-Reply-To: <007901c014c0$852eff60$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEOCHDAA.tim_one@email.msn.com>

[Fredrik Lundh]
> just fyi, Tkinter seems to be extremely unstable on Win95 and
> Win98FE (when shut down, the python process grabs the key-
> board and hangs.  the only way to kill the process is to reboot)
>
> the same version of Tk (wish) works just fine...

So what can we do about this?  I'm wary about two things:

1. Thomas reported one instance of Win98FE rot, of a kind that simply
   plagues Windows for any number of reasons.  He wasn't able to
   reproduce it.  So while I've noted his report, I'm giving it little
   weight so far.

2. I never use Tkinter, except indirectly for IDLE.  I've been in and
   out of 2b1 IDLE on Win98SE all day and haven't seen a hint of trouble.

   But you're a Tkinter power user of the highest order.  So one thing
   I'm wary of is that you may have magical Tcl/Tk envars (or God only
   knows what else) set up to deal with the multiple copies of Tcl/Tk
   I'm betting you have on your machine.  In fact, I *know* you have
   multiple Tcl/Tks sitting around because of your wish comment:
   the Python installer no longer installs wish, so you got that from
   somewhere else.  Are you positive you're not mixing versions
   somehow?  If anyone could mix them in a way we can't stop, it's
   you <wink>.

If anyone else is having Tkinter problems, they haven't reported them.
Though I suspect few have tried it!

In the absence of more helpers, can you pass on a specific (small if
possible) program that exhibits the "hang" problem?  And by "extremely
unstable", do you mean that there are many strange problems, or is the "hang
on exit" problem the only one?

Thanks in advance!

beleagueredly y'rs  - tim





From skip at mojam.com  Mon Sep  4 05:12:06 2000
From: skip at mojam.com (Skip Montanaro)
Date: Sun, 3 Sep 2000 22:12:06 -0500 (CDT)
Subject: [Python-Dev] libdb on by default, but no db.h
In-Reply-To: <200009040007.CAA14488@python.inrialpes.fr>
References: <200009040007.CAA14488@python.inrialpes.fr>
Message-ID: <14771.4870.954882.513141@beluga.mojam.com>


    Vlad> On my AIX combo, configure assumes --with-libdb (yes) but reports
    Vlad> that

    Vlad> ...
    Vlad> checking for db_185.h... no
    Vlad> checking for db.h... no
    Vlad> ...

    Vlad> This leaves the bsddbmodule enabled but it can't compile,
    Vlad> obviously.  So this needs to be fixed ASAP.

Oops.  Please try the attached patch and let me know if it runs better.
(Don't forget to run autoconf.)  Besides fixing the problem you
reported, it tells users why bsddb was not enabled if they asked for it
but it could not be supported.

Skip

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: configure.in.patch
URL: <http://mail.python.org/pipermail/python-dev/attachments/20000903/2ed2fa8b/attachment.txt>

From greg at cosc.canterbury.ac.nz  Mon Sep  4 05:21:14 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon, 04 Sep 2000 15:21:14 +1200 (NZST)
Subject: [Python-Dev] Re: DEBUG_SAVEALL feature for gc not in 2.0b1?
In-Reply-To: <200009021407.QAA29710@python.inrialpes.fr>
Message-ID: <200009040321.PAA18947@s454.cosc.canterbury.ac.nz>

Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov):

> The point is that we have two types of garbage: collectable and
> uncollectable.

I don't think these are the right terms. The collector can
collect the "uncollectable" garbage all right -- what it can't
do is *dispose* of it. So it should be called "undisposable"
or "unrecyclable" or "undigestable" or something.
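Whatever we call them, the distinction is easy to see with the collector's debug flags. A small sketch using the standard `gc` module (modern flag and method names assumed): with `DEBUG_SAVEALL` set, objects the collector finds unreachable are parked in `gc.garbage` instead of being freed -- collected, but not disposed of.

```python
import gc

# Park everything the collector finds unreachable in gc.garbage
# instead of freeing it.
gc.set_debug(gc.DEBUG_SAVEALL)

class Node:
    pass

a, b = Node(), Node()
a.partner, b.partner = b, a   # create a reference cycle
del a, b                      # now unreachable, but not freed by refcounting

unreachable = gc.collect()    # the collector *finds* the cycle...
cycle_nodes = [x for x in gc.garbage if isinstance(x, Node)]

# ...and we can inspect it before cleaning up.
gc.set_debug(0)
gc.garbage.clear()
```

Here `unreachable` counts what the collector identified; the two `Node` objects survive in `gc.garbage` for inspection rather than being recycled.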

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From Vladimir.Marangozov at inrialpes.fr  Mon Sep  4 05:51:31 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Mon, 4 Sep 2000 05:51:31 +0200 (CEST)
Subject: [Python-Dev] libdb on by default, but no db.h
In-Reply-To: <14771.4870.954882.513141@beluga.mojam.com> from "Skip Montanaro" at Sep 03, 2000 10:12:06 PM
Message-ID: <200009040351.FAA19784@python.inrialpes.fr>

Skip Montanaro wrote:
> 
> Oops.  Please try the attached patch and let me know it it runs better.

Runs fine. Thanks!

After looking again at Modules/Setup.config, I wonder whether it would
be handy to add a configure option --with-shared (or similar) which would
uncomment #*shared* there and in Setup automatically (in line with the
other recent niceties like --with-pydebug).

Uncommenting them manually in two files now is a pain... :-)

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From skip at mojam.com  Mon Sep  4 06:06:40 2000
From: skip at mojam.com (Skip Montanaro)
Date: Sun, 3 Sep 2000 23:06:40 -0500 (CDT)
Subject: [Python-Dev] libdb on by default, but no db.h
In-Reply-To: <200009040351.FAA19784@python.inrialpes.fr>
References: <14771.4870.954882.513141@beluga.mojam.com>
	<200009040351.FAA19784@python.inrialpes.fr>
Message-ID: <14771.8144.959081.410574@beluga.mojam.com>

    Vlad> After looking again at Modules/Setup.config, I wonder whether it
    Vlad> would be handy to add a configure option --with-shared (or
    Vlad> similar) which would uncomment #*shared* there and in Setup
    Vlad> automatically (in line with the other recent niceties like
    Vlad> --with-pydebug).

    Vlad> Uncommenting them manually in two files now is a pain... :-)

Agreed.  I'll submit a patch.

Skip



From skip at mojam.com  Mon Sep  4 06:16:52 2000
From: skip at mojam.com (Skip Montanaro)
Date: Sun, 3 Sep 2000 23:16:52 -0500 (CDT)
Subject: [Python-Dev] libdb on by default, but no db.h
In-Reply-To: <200009040351.FAA19784@python.inrialpes.fr>
References: <14771.4870.954882.513141@beluga.mojam.com>
	<200009040351.FAA19784@python.inrialpes.fr>
Message-ID: <14771.8756.760841.38442@beluga.mojam.com>

    Vlad> After looking again at Modules/Setup.config, I wonder whether it
    Vlad> would be handy to add a configure option --with-shared (or
    Vlad> similar) which would uncomment #*shared* there and in Setup
    Vlad> automatically (in line with the other recent niceties like
    Vlad> --with-pydebug).

On second thought, I think this is not a good idea right now because
Modules/Setup is not usually fiddled by the configure step.  If "#*shared*"
existed in Modules/Setup and the user executed "./configure --with-shared",
they'd be disappointed that the modules declared in Modules/Setup following
that line weren't built as shared objects.

Skip




From greg at cosc.canterbury.ac.nz  Mon Sep  4 06:34:02 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon, 04 Sep 2000 16:34:02 +1200 (NZST)
Subject: [Python-Dev] New commands to display license, credits,
 copyright info
In-Reply-To: <14770.58832.801784.267646@bitdiddle.concentric.net>
Message-ID: <200009040434.QAA18957@s454.cosc.canterbury.ac.nz>

Jeremy Hylton <jeremy at beopen.com>:

> I think the meaning is clear if the phrasing is awkward.  Would 'that'
> be any better than 'this'?

How about "for more information"?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From tim_one at email.msn.com  Mon Sep  4 10:08:27 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Mon, 4 Sep 2000 04:08:27 -0400
Subject: [Python-Dev] ME so mmap
In-Reply-To: <DOEGJPEHJOJKDFNLNCHIKEDJCAAA.audun@mindspring.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEOLHDAA.tim_one@email.msn.com>

Audun S. Runde mailto:audun at mindspring.com wins a Fabulous Prize for being
our first Windows ME tester!  Also our only, and I think he should get
another prize just for that.

The good news is that the creaky old Wise installer worked.  The bad news is
that we've got a Windows-ME-specific std test failure, in test_mmap.

This is from the installer available via anonymous FTP from
python.beopen.com,

     /pub/windows/beopen-python2b1p2-20000901.exe
     5,783,115 bytes

and here's the meat of the bad news in Audun's report:

> PLATFORM 2.
> Windows ME
> (version/build 4.90.3000 aka. "Technical Beta Special Edition"
> -- claimed to be identical to the shipping version),
> no previous Python install
> =============================================================
>
> + Try
>     python lib/test/regrtest.py
>
> --> results:
> 76 tests OK.
> 1 test failed: test_mmap (see below)
> 23 tests skipped (al, cd, cl, crypt, dbm, dl, fcntl, fork1, gdbm, gl, grp,
> imgfile, largefile, linuxaudiodev, minidom, nis, openpty, poll, pty, pwd,
> signal, sunaudiodev, timing)
>
> Rerun of test_mmap.py:
> ----------------------
> C:\Python20\Lib\test>..\..\python test_mmap.py
> Traceback (most recent call last):
>   File "test_mmap.py", line 121, in ?
>     test_both()
>   File "test_mmap.py", line 18, in test_both
>     m = mmap.mmap(f.fileno(), 2 * PAGESIZE)
> WindowsError: [Errno 6] The handle is invalid
>
> C:\Python20\Lib\test>
>
>
> --> Please let me know if there is anything I can do to help with
> --> this -- but I might need detailed instructions ;-)

So we're not even getting off the ground with mmap on ME -- it's dying in
the mmap constructor.  I'm sending this to Mark Hammond directly because he
was foolish enough <wink> to fix many mmap-on-Windows problems, but if any
other developer has access to ME feel free to grab this joy away from him.
There are no reports of test_mmap failing on any other flavor of Windows (&
clean reports from 95, 2000, NT, 98), looks extremely unlikely that it's a
flaw in the installer, and it's a gross problem right at the start.
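For anyone who wants to poke at this outside the test harness, the failing constructor call boils down to a few lines. This sketch (modern `mmap`/`tempfile` API assumed) is roughly what test_mmap does before it dies on ME: create a two-page scratch file, map it, and do a read/write round-trip.

```python
import mmap
import tempfile

PAGESIZE = mmap.PAGESIZE

with tempfile.TemporaryFile() as f:
    # The file must already be as large as the region we map.
    f.write(b'\0' * (2 * PAGESIZE))
    f.flush()
    m = mmap.mmap(f.fileno(), 2 * PAGESIZE)   # the call that dies on ME
    try:
        m[0:6] = b'foobar'    # write through the mapping...
        data = m[0:6]         # ...and read it back
    finally:
        m.close()
```

If the constructor raises `WindowsError: [Errno 6] The handle is invalid`, the failure matches Audun's report exactly.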

Best guess now is that it's a bug in ME.  What?  A bug in a new flavor of
Windows?!  Na, couldn't be ...

may-as-well-believe-that-money-doesn't-grow-on-trees-ly y'rs  - tim





From tim_one at email.msn.com  Mon Sep  4 10:49:12 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Mon, 4 Sep 2000 04:49:12 -0400
Subject: [Python-Dev] Even more prerelease Python fun on Windows!
In-Reply-To: <200009021500.RAA00776@python.inrialpes.fr>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEONHDAA.tim_one@email.msn.com>

[Vladimir Marangozov, heroically responds to pleas for Windows help!]

>     /pub/windows/beopen-python2b1p2-20000901.exe
>     5,783,115 bytes
>
> In case my feedback matters, being a Windows amateur,

That's *good*:  amateurs make better testers because they're less prone to
rationalize away problems or gloss over things they needed to fix by hand.

> the installation went smoothly on my home P100

You're kidding, right?  They give away faster processors in cereal boxes now
<wink>.

> with some early Win95 pre-release.

Brrrrrrr.  Even toxic waste dumps won't accept *those* anymore!

> In the great Windows tradition, I was asked to reboot & did so.

That's interesting -- first report of a reboot I've gotten.  But it makes
sense:  everyone else who has tried this is an eager Windows beta tester or
a Python Windows developer, so all their system files are likely up to date.
Windows only makes you reboot if it has to *replace* a system file with a
newer one from the install (unlike Unix, Windows won't let you "unlink" a
file that's in use; that's why they have to replace popular system files
during the reboot, *before* Windows proper starts up).

> The regression tests passed in console mode.

Frankly, I'm amazed!  Please don't test anymore <0.9 wink>.

> Then successfully launched IDLE.  In IDLE I get *beep* sounds every
> time I hit RETURN without typing anything.  I was able to close both
> the console and IDLE without problems.

Assuming you saw Guido's msg about the *beep*s.  If not, it's an IDLE buglet
and you're not alone.  Won't be fixed for 2b1, maybe by 2.0.

> Haven't tried the uninstall link, though.

It will work -- kinda.  It doesn't really uninstall everything on any flavor
of Windows.  I think BeOpen.com should agree to buy me an installer newer
than your Win95 prerelease.

> don't-ask-me-any-questions-about-Windows'ly y'rs

I was *going* to, and I still am.  And your score is going on your Permanent
Record, so don't screw this up!  But since you volunteered such a nice and
helpful test report, I'll give you a relatively easy one:  which company
sells Windows?

A. BeOpen PythonLabs
B. ActiveState
C. ReportLabs
D. Microsoft
E. PythonWare
F. Red Hat
G. General Motors
H. Corporation for National Research Initiatives
I. Free Software Foundation
J. Sun Microsystems
K. National Security Agency

hint:-it's-the-only-one-without-an-"e"-ly y'rs  - tim





From nascheme at enme.ucalgary.ca  Mon Sep  4 16:18:28 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Mon, 4 Sep 2000 08:18:28 -0600
Subject: [Python-Dev] Thread problems on Linux
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEOBHDAA.tim_one@email.msn.com>; from Tim Peters on Sun, Sep 03, 2000 at 10:44:28PM -0400
References: <200009040233.VAA27866@sirius> <LNBBLJKPBEHFEDALKOLCGEOBHDAA.tim_one@email.msn.com>
Message-ID: <20000904081828.B23753@keymaster.enme.ucalgary.ca>

The pthread model does not map well onto the Linux clone model.  The
standard seems to assume that threads are implemented as processes.
Linus is adding some extra features in 2.4 which may help (thread
groups).  We will see if the glibc maintainers can make use of these.

I'm thinking of creating a thread_linux header file.  Do you think that
would be a good idea?  clone() seems to be pretty easy to use although
it is quite low level.

  Neil



From guido at beopen.com  Mon Sep  4 17:40:58 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 04 Sep 2000 10:40:58 -0500
Subject: [Python-Dev] Thread problems on Linux
In-Reply-To: Your message of "Mon, 04 Sep 2000 08:18:28 CST."
             <20000904081828.B23753@keymaster.enme.ucalgary.ca> 
References: <200009040233.VAA27866@sirius> <LNBBLJKPBEHFEDALKOLCGEOBHDAA.tim_one@email.msn.com>  
            <20000904081828.B23753@keymaster.enme.ucalgary.ca> 
Message-ID: <200009041540.KAA23263@cj20424-a.reston1.va.home.com>

> The pthread model does not map well onto the Linux clone model.  The
> standard seems to assume that threads are implemented as processes.
> Linus is adding some extra features in 2.4 which may help (thread
> groups).  We will see if the glibc maintainers can make use of these.
> 
> I'm thinking of creating a thread_linux header file.  Do you think that
> would be a good idea?  clone() seems to be pretty easy to use although
> it is quite low level.

This seems nice at first, but probably won't work too well when you
consider embedding Python in applications that use the Posix threads
library.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From cgw at alum.mit.edu  Mon Sep  4 17:02:03 2000
From: cgw at alum.mit.edu (Charles G Waldman)
Date: Mon, 4 Sep 2000 10:02:03 -0500
Subject: [Python-Dev] mail sent as "root"
Message-ID: <200009041502.KAA05864@buffalo.fnal.gov>

sorry for the mail sent as "root" - d'oh.  I still am not able to
send mail from fnal.gov to python.org (no route to host) and am
playing some screwy games to get my mail delivered.




From cgw at alum.mit.edu  Mon Sep  4 17:52:42 2000
From: cgw at alum.mit.edu (Charles G Waldman)
Date: Mon, 4 Sep 2000 10:52:42 -0500
Subject: [Python-Dev] Thread problems on Linux
Message-ID: <200009041552.KAA06048@buffalo.fnal.gov>

Neil wrote:

>I'm thinking of creating a thread_linux header file.  Do you think that 
>would be a good idea?  clone() seems to be pretty easy to use although 
>it is quite low level. 
 
Sounds like a lot of work to me.   The pthread library gets us two
things (essentially) - a function to create threads, which you could
pretty easily replace with clone(), and other functions to handle
mutexes and conditions.  If you replace pthread_create with clone
you have a lot of work to do to implement the locking stuff... Of
course, if you're willing to do this work, then more power to you.
But from my point of view, I'm at a site where we're using pthreads
on Linux in non-Python applications as well, so I'm more interested
in diagnosing and trying to fix (or at least putting together a   
detailed and coherent bug report on) the platform bugs, rather than
trying to work around them in the Python interpreter.





From Vladimir.Marangozov at inrialpes.fr  Mon Sep  4 20:11:33 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Mon, 4 Sep 2000 20:11:33 +0200 (CEST)
Subject: [Python-Dev] Even more prerelease Python fun on Windows!
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKEONHDAA.tim_one@email.msn.com> from "Tim Peters" at Sep 04, 2000 04:49:12 AM
Message-ID: <200009041811.UAA21177@python.inrialpes.fr>

Tim Peters wrote:
> 
> [Vladimir Marangozov, heroically responds to pleas for Windows help!]
> 
> That's *good*:  amateurs make better testers because they're less prone to
> rationalize away problems or gloss over things they needed to fix by hand.

Thanks. This is indeed the truth.

> 
> > the installation went smoothly on my home P100
> 
> You're kidding, right?  They give away faster processors in cereal boxes now
> <wink>.

No. I'm proud to possess a working Pentium 100 with the F0 0F bug. This
is a genuine snapshot of the advances of a bunch of technologies at the
end of the XX century.

> 
> > with some early Win95 pre-release.
> 
> Brrrrrrr.  Even toxic waste dumps won't accept *those* anymore!

see above.

> 
> > Haven't tried the uninstall link, though.
> 
> It will work -- kinda.  It doesn't really uninstall everything on any flavor
> of Windows.  I think BeOpen.com should agree to buy me an installer newer
> than your Win95 prerelease.

Wasn't brave enough to reboot once again <wink>.

> 
> > don't-ask-me-any-questions-about-Windows'ly y'rs
> 
> I was *going* to, and I still am.

Seriously, if you need more feedback, you'll have to give me click-by-click
instructions. I'm in trouble each time I want to do any real work within
the Windows clickodrome.

> And your score is going on your Permanent Record, so don't screw this up!
> But since you volunteered such a nice and helpful test report, I'll give
> you a relatively easy one:  which company sells Windows?
> 
> A. BeOpen PythonLabs
> B. ActiveState
> C. ReportLabs
> D. Microsoft
> E. PythonWare
> F. Red Hat
> G. General Motors
> H. Corporation for National Research Initiatives
> I. Free Software Foundation
> J. Sun Microsystems
> K. National Security Agency
> 
> hint:-it's-the-only-one-without-an-"e"-ly y'rs  - tim
> 

Hm. Thanks for the hint! Let's see. It's not "me" for sure. Could
be "you" though <wink>. I wish it was General Motors...

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From nascheme at enme.ucalgary.ca  Mon Sep  4 21:28:38 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Mon, 4 Sep 2000 13:28:38 -0600
Subject: [Python-Dev] Thread problems on Linux
In-Reply-To: <200009041504.KAA05892@buffalo.fnal.gov>; from Charles G Waldman on Mon, Sep 04, 2000 at 10:04:40AM -0500
References: <200009041504.KAA05892@buffalo.fnal.gov>
Message-ID: <20000904132838.A25571@keymaster.enme.ucalgary.ca>

On Mon, Sep 04, 2000 at 10:04:40AM -0500, Charles G Waldman wrote:
>If you replace pthread_create with clone you have a lot of work to do
>to implement the locking stuff...

Locks exist in /usr/include/asm.  It is Linux specific but so is
clone().

  Neil



From thomas at xs4all.net  Mon Sep  4 22:14:39 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 4 Sep 2000 22:14:39 +0200
Subject: [Python-Dev] Vacation
Message-ID: <20000904221438.U12695@xs4all.nl>

I'll be offline for two weeks, enjoying a sunny (hopefully!) holiday in
southern Italy. I uploaded the docs I had for augmented assignment; not
terribly much I'm afraid :P We had some trouble at work over the weekend,
which cost me most of the time I thought I had to finish some of this up.

(For the developers among you that, like me, do a bit of sysadmining on the
side: one of our nameservers was hacked, either through password-guessing
(unlikely), sniffing (unlikely), a hole in ssh (1.2.26, possible but
unlikely) or a hole in named (BIND 8.2.2-P5, very unlikely). There was a
copy of the named binary in /tmp under an obscure filename, which leads us
to believe it was the latter -- which scares the shit out of me personally,
as anything before P3 was proven to be insecure, and the entire sane world
and their dog runs P5. Possibly it was 'just' a bug in Linux/RedHat, though.
Cleaning up after scriptkiddies, a great way to spend your weekend before
your vacation, let me tell you! :P)

I'll be back on the 19th, plenty of time left to do beta testing after that
:)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From rob at hooft.net  Tue Sep  5 08:15:04 2000
From: rob at hooft.net (Rob W. W. Hooft)
Date: Tue, 5 Sep 2000 08:15:04 +0200 (CEST)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Misc NEWS,1.52,1.53
In-Reply-To: <200009050438.VAA03390@slayer.i.sourceforge.net>
References: <200009050438.VAA03390@slayer.i.sourceforge.net>
Message-ID: <14772.36712.451676.957918@temoleh.chem.uu.nl>

! Augmented Assignment
! --------------------
!
! This must have been the most-requested feature of the past years!
! Eleven new assignment operators were added:
!
!     += -+ *= /= %= **= <<= >>= &= ^= |=

Interesting operator "-+" in there! I won't submit this as patch
to sourceforge....

Index: dist/src/Misc/NEWS
===================================================================
RCS file: /cvsroot/python/python/dist/src/Misc/NEWS,v
retrieving revision 1.53
diff -u -c -r1.53 NEWS
cvs server: conflicting specifications of output style
*** dist/src/Misc/NEWS  2000/09/05 04:38:34     1.53
--- dist/src/Misc/NEWS  2000/09/05 06:14:16
***************
*** 66,72 ****
  This must have been the most-requested feature of the past years!
  Eleven new assignment operators were added:
  
!     += -+ *= /= %= **= <<= >>= &= ^= |=
  
  For example,
  
--- 66,72 ----
  This must have been the most-requested feature of the past years!
  Eleven new assignment operators were added:
  
!     += -= *= /= %= **= <<= >>= &= ^= |=
  
  For example,
  


Regards,

Rob Hooft

-- 
=====   rob at hooft.net          http://www.hooft.net/people/rob/  =====
=====   R&D, Nonius BV, Delft  http://www.nonius.nl/             =====
===== PGPid 0xFA19277D ========================== Use Linux! =========



From bwarsaw at beopen.com  Tue Sep  5 09:23:55 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Tue, 5 Sep 2000 03:23:55 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Misc NEWS,1.52,1.53
References: <200009050438.VAA03390@slayer.i.sourceforge.net>
	<14772.36712.451676.957918@temoleh.chem.uu.nl>
Message-ID: <14772.40843.669856.756485@anthem.concentric.net>

>>>>> "RWWH" == Rob W W Hooft <rob at hooft.net> writes:

    RWWH> Interesting operator "-+" in there! I won't submit this as
    RWWH> patch to sourceforge....

It's Python 2.0's way of writing "no op" :)

I've already submitted this internally.  Doubt it will make it into
2.0b1, but we'll get it into 2.0 final.

-Barry



From mbel44 at dial.pipex.net  Tue Sep  5 13:19:42 2000
From: mbel44 at dial.pipex.net (Toby Dickenson)
Date: Tue, 05 Sep 2000 12:19:42 +0100
Subject: [Python-Dev] Re: [I18n-sig] ustr
In-Reply-To: <200007071244.HAA03694@cj20424-a.reston1.va.home.com>
References: <r39bmsc6remdupiv869s5agm46m315ebeq@4ax.com>   <3965BBE5.D67DD838@lemburg.com> <200007071244.HAA03694@cj20424-a.reston1.va.home.com>
Message-ID: <vhl9rsclpk9e89oaeehpg7sec79ar8cdru@4ax.com>

On Fri, 07 Jul 2000 07:44:03 -0500, Guido van Rossum
<guido at beopen.com> wrote:

We debated a ustr function in July. Does anyone have this in hand? I
can prepare a patch if necessary.

>> Toby Dickenson wrote:
>> > 
>> > I'm just nearing the end of getting Zope to play well with unicode
>> > data. Most of the changes involved replacing a call to str, in
>> > situations where either a unicode or narrow string would be
>> > acceptable.
>> > 
>> > My best alternative is:
>> > 
>> > def convert_to_something_stringlike(x):
>> >     if type(x)==type(u''):
>> >         return x
>> >     else:
>> >         return str(x)
>> > 
>> > This seems like a fundamental operation - would it be worth having
>> > something similar in the standard library?
>
>Marc-Andre Lemburg replied:
>
>> You mean: for Unicode return Unicode and for everything else
>> return strings ?
>> 
>> It doesn't fit well with the builtins str() and unicode(). I'd
>> say, make this a userland helper.
>
>I think this would be helpful to have in the std library.  Note that
>in JPython, you'd already use str() for this, and in Python 3000 this
>may also be the case.  At some point in the design discussion for the
>current Unicode support we also thought that we wanted str() to do
>this (i.e. allow 8-bit and Unicode string returns), until we realized
>that there were too many places that would be very unhappy if str()
>returned a Unicode string!
>
>The problem is similar to a situation you have with numbers: sometimes
>you want a coercion that converts everything to float except it should
>leave complex numbers complex.  In other words it coerces up to float
>but it never coerces down to float.  Luckily you can write that as
>"x+0.0" which converts int and long to float with the same value while
>leaving complex alone.
>
>For strings there is no compact notation like "+0.0" if you want to
>convert to string or Unicode -- adding "" might work in Perl, but not
>in Python.
>
>I propose ustr(x) with the semantics given by Toby.  Class support (an
>__ustr__ method, with fallbacks on __str__ and __unicode__) would also
>be handy.


Toby Dickenson
tdickenson at geminidataloggers.com
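
[A minimal sketch of the proposed helper, following Toby's semantics. The
name ustr and the exact behavior are the proposal under discussion, not a
shipped API; the __ustr__/__unicode__ class hooks Guido mentions are
omitted here. Note how it mirrors the "x + 0.0" trick for numbers: coerce
up to string, but never coerce Unicode down.]

```python
def ustr(x):
    # Return Unicode strings unchanged; coerce everything else via str().
    # This "coerce up, never down" rule is the string analogue of writing
    # x + 0.0, which floats ints/longs but leaves complex numbers complex.
    if isinstance(x, type(u'')):
        return x
    return str(x)

print(ustr(u'abc'))   # Unicode passes through untouched
print(ustr(42))       # everything else goes through str()
```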



From guido at beopen.com  Tue Sep  5 16:29:44 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 05 Sep 2000 09:29:44 -0500
Subject: [Python-Dev] License status and 1.6 and 2.0 releases
Message-ID: <200009051429.JAA19296@cj20424-a.reston1.va.home.com>

Folks,

After a Labor Day weekend full of excitement, I have good news and bad
news.

The good news is that both Python 1.6 and Python 2.0b1 will be
released today (in *some* US timezone :-).  The former from
python.org, the latter from pythonlabs.com.

The bad news is that there's still no agreement from Stallman that the
CNRI open source license is GPL-compatible.  See my previous post
here.  (Re: Conflict with the GPL.)  Given that we still don't know
that dual licensing will be necessary and sufficient to make the 2.0
license GPL-compatible, we decided not to go for dual licensing just
yet -- if it transpires later that it is necessary, we'll add it to
the 2.0 final license.

At this point, our best shot seems to be to arrange a meeting between
CNRI's lawyer and Stallman's lawyer.  Without the lawyers there, we
never seem to be able to get a commitment to an agreement.  CNRI is
willing to do this; Stallman's lawyer (Eben Moglen; he's a law
professor at Columbia U, not NYU as I previously mentioned) is even
harder to get a hold of than Stallman himself, so it may be a while.
Given CNRI's repeatedly expressed commitment to move this forward, I
don't want to hold up any of the releases that were planned for today
any longer.

So look forward to announcements later today, and get out the
(qualified) champagne...!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From Vladimir.Marangozov at inrialpes.fr  Tue Sep  5 16:17:36 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Tue, 5 Sep 2000 16:17:36 +0200 (CEST)
Subject: [Python-Dev] License status and 1.6 and 2.0 releases
In-Reply-To: <200009051430.JAA19323@cj20424-a.reston1.va.home.com> from "Guido van Rossum" at Sep 05, 2000 09:30:32 AM
Message-ID: <200009051417.QAA27424@python.inrialpes.fr>

Guido van Rossum wrote:
> 
> Folks,
> 
> After a Labor Day weekend full of excitement, I have good news and bad
> news.

Don't worry about the bad news! :-)

> 
> The good news is that both Python 1.6 and Python 2.0b1 will be
> released today (in *some* US timezone :-).  The former from
> python.org, the latter from pythonlabs.com.

Great! W.r.t. the latest demand for help with patches, tell us which
patches you want, and from whom, among those you know about.

> 
> The bad news is that there's still no agreement from Stallman that the
> CNRI open source license is GPL-compatible.

This is no surprise.  I don't think they will agree any time soon.
If they do so by the end of the year, that would make us happy, though.

> So look forward to announcements later today, and get out the
> (qualified) champagne...!

Ahem, which one?
Veuve Clicquot, Dom Pérignon, Moët & Chandon or Taittinger Millésimé? :-)

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From skip at mojam.com  Tue Sep  5 16:16:39 2000
From: skip at mojam.com (Skip Montanaro)
Date: Tue, 5 Sep 2000 09:16:39 -0500 (CDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Misc NEWS,1.53,1.54
In-Reply-To: <200009051242.FAA13258@slayer.i.sourceforge.net>
References: <200009051242.FAA13258@slayer.i.sourceforge.net>
Message-ID: <14773.71.989338.110654@beluga.mojam.com>


    Guido> I could use help here!!!!  Please mail me patches ASAP.  We may have
    Guido> to put some of this off to 2.0final, but it's best to have it in shape
    Guido> now...

Attached.

Skip

-------------- next part --------------
A non-text attachment was scrubbed...
Name: news.patch
Type: application/octet-stream
Size: 539 bytes
Desc: note about readline history
URL: <http://mail.python.org/pipermail/python-dev/attachments/20000905/3ad2996f/attachment.obj>

From jeremy at beopen.com  Tue Sep  5 16:58:46 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Tue, 5 Sep 2000 10:58:46 -0400 (EDT)
Subject: [Python-Dev] malloc restructuring in 1.6
Message-ID: <14773.2598.24665.940797@bitdiddle.concentric.net>

I'm editing the NEWS file for 2.0 and noticed that Vladimir's malloc
changes are listed as new for 2.0.  I think they actually went into
1.6, but I'm not certain.  Can anyone confirm?

Jeremy



From petrilli at amber.org  Tue Sep  5 17:19:05 2000
From: petrilli at amber.org (Christopher Petrilli)
Date: Tue, 5 Sep 2000 11:19:05 -0400
Subject: [Python-Dev] License status and 1.6 and 2.0 releases
In-Reply-To: <200009051417.QAA27424@python.inrialpes.fr>; from Vladimir.Marangozov@inrialpes.fr on Tue, Sep 05, 2000 at 04:17:36PM +0200
References: <200009051430.JAA19323@cj20424-a.reston1.va.home.com> <200009051417.QAA27424@python.inrialpes.fr>
Message-ID: <20000905111904.A14540@trump.amber.org>

Vladimir Marangozov [Vladimir.Marangozov at inrialpes.fr] wrote:
> Ahem, which one?
> Veuve Clicquot, Dom Pérignon, Moët & Chandon or Taittinger Millésimé? :-)

Given the involvement of Richard Stallman, and its similarity to a
peace accord during WWII, I'd vote for Pol Roger Sir Winston Churchill 
cuvee :-)

Chris

-- 
| Christopher Petrilli
| petrilli at amber.org



From Vladimir.Marangozov at inrialpes.fr  Tue Sep  5 17:38:47 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Tue, 5 Sep 2000 17:38:47 +0200 (CEST)
Subject: [Python-Dev] malloc restructuring in 1.6
In-Reply-To: <14773.2598.24665.940797@bitdiddle.concentric.net> from "Jeremy Hylton" at Sep 05, 2000 10:58:46 AM
Message-ID: <200009051538.RAA27615@python.inrialpes.fr>

Jeremy Hylton wrote:
> 
> I'm editing the NEWS file for 2.0 and noticed that Vladimir's malloc
> changes are listed as new for 2.0.  I think they actually went into
> 1.6, but I'm not certain.  Can anyone confirm?

Yes, they're in 1.6.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From Vladimir.Marangozov at inrialpes.fr  Tue Sep  5 18:02:51 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Tue, 5 Sep 2000 18:02:51 +0200 (CEST)
Subject: [Python-Dev] License status and 1.6 and 2.0 releases
In-Reply-To: <20000905111904.A14540@trump.amber.org> from "Christopher Petrilli" at Sep 05, 2000 11:19:05 AM
Message-ID: <200009051602.SAA27759@python.inrialpes.fr>

Christopher Petrilli wrote:
> 
> Vladimir Marangozov [Vladimir.Marangozov at inrialpes.fr] wrote:
> > Ahem, which one?
> > Veuve Clicquot, Dom Pérignon, Moët & Chandon or Taittinger Millésimé? :-)
> 
> Given the involvement of Richard Stallman, and its similarity to a
> peace accord during WWII, I'd vote for Pol Roger Sir Winston Churchill 
> cuvee :-)
> 

Ah. That would have been my pleasure, but I am out of stock for this one.
Sorry. However, I'll make sure to order a bottle and keep it ready in my
cellar for the ratification of the final license. In the meantime, the
above is the best I can offer -- the rest is cheap stuff to be consumed
only on bad news <wink>.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From jeremy at beopen.com  Tue Sep  5 20:43:04 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Tue, 5 Sep 2000 14:43:04 -0400 (EDT)
Subject: [Python-Dev] checkin messages that reference SF bugs or patches
Message-ID: <14773.16056.958855.185889@bitdiddle.concentric.net>

If you commit a change that closes an SF bug or patch, please write a
checkin message that describes the change independently of the
information stored in SF.  You should also reference the bug or patch
id, but the id alone is not sufficient.

I am working on the NEWS file for Python 2.0 and have found a few
checkin messages that just said "SF patch #010101."  It's tedious to
go find the closed patch entry and read all the discussion.  Let's
assume the person reading the CVS log does not have access to the SF
databases. 

Jeremy



From akuchlin at mems-exchange.org  Tue Sep  5 20:57:05 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Tue, 5 Sep 2000 14:57:05 -0400
Subject: [Python-Dev] Updated version of asyncore.py?
Message-ID: <20000905145705.A2512@kronos.cnri.reston.va.us>

asyncore.py in the CVS tree is revision 2.40 1999/05/27, while Sam
Rushing's most recent tarball contains revision 2.49 2000/05/04.  The
major change is that lots of methods in 2.49 have an extra optional
argument, map=None.  (I noticed the discrepancy while packaging ZEO,
which assumes the most recent version.)

asynchat.py is also slightly out of date: 
< #     Id: asynchat.py,v 2.23 1999/05/01 04:49:24 rushing Exp
---
> #     $Id: asynchat.py,v 2.25 1999/11/18 11:01:08 rushing Exp $

The CVS versions have additional docstrings and a few typo fixes in
comments.  Should the Python library versions be updated?  (+1 from
me, obviously.)

--amk



From martin at loewis.home.cs.tu-berlin.de  Tue Sep  5 22:46:16 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Tue, 5 Sep 2000 22:46:16 +0200
Subject: [Python-Dev] Re: urllib.URLopener does not work with proxies (Bug 110692)
Message-ID: <200009052046.WAA03605@loewis.home.cs.tu-berlin.de>

Hi Andrew,

This is likely incorrect usage of the module. The proxy argument must
be a dictionary mapping strings of protocol names to  strings of URLs.

Please confirm whether this was indeed the problem; if not, please add
more detail as to how exactly you had used the module.

See

http://sourceforge.net/bugs/?func=detailbug&bug_id=110692&group_id=5470

for the status of this report; it would be appreciated if you recorded
any comments on that page.

Regards,
Martin




From guido at cj20424-a.reston1.va.home.com  Tue Sep  5 20:49:38 2000
From: guido at cj20424-a.reston1.va.home.com (Guido van Rossum)
Date: Tue, 05 Sep 2000 13:49:38 -0500
Subject: [Python-Dev] Python 1.6, the final release, is out!
Message-ID: <200009051849.NAA01719@cj20424-a.reston1.va.home.com>

------- Blind-Carbon-Copy

To: python-list at python.org (Python mailing list),
    python-announce-list at python.org
Subject: Python 1.6, the final release, is out!
From: Guido van Rossum <guido at beopen.com>
Date: Tue, 05 Sep 2000 13:49:38 -0500
Sender: guido at cj20424-a.reston1.va.home.com

OK folks, believe it or not, Python 1.6 is released.

Please go here to pick it up:

    http://www.python.org/1.6/

There's a tarball and a Windows installer, and a long list of new
features.

CNRI has placed an open source license on this version.  CNRI believes
that this version is compatible with the GPL, but there is a
technicality concerning the choice of law provision, which Richard
Stallman believes may make it incompatible.  CNRI is still trying to
work this out with Stallman.  Future versions of Python will be
released by BeOpen PythonLabs under a GPL-compatible license if at all
possible.

There's Only One Way To Do It.

- --Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)

------- End of Blind-Carbon-Copy



From martin at loewis.home.cs.tu-berlin.de  Wed Sep  6 00:03:16 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Wed, 6 Sep 2000 00:03:16 +0200
Subject: [Python-Dev] undefined symbol in custom interpeter (Bug 110701)
Message-ID: <200009052203.AAA04445@loewis.home.cs.tu-berlin.de>

Your PR is now being tracked at

http://sourceforge.net/bugs/?func=detailbug&bug_id=110701&group_id=5470

This is not a bug in Python. When linking a custom interpreter, you
need to make sure all symbols are exported to modules. On FreeBSD, you
do this by adding -Wl,--export-dynamic to the linker line.
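
[For illustration, a hypothetical link line for such a custom interpreter;
the object file name, library version, and paths are made-up placeholders,
and the part that matters is the -Wl,--export-dynamic flag, which tells
GNU ld to put all symbols in the dynamic symbol table so that extension
modules loaded at runtime can resolve them:]

```shell
# Illustrative only: link a custom interpreter embedding Python so that
# its symbols are visible to dynamically loaded extension modules
# (FreeBSD/Linux with GNU ld).
gcc -o myinterp main.o -L/usr/local/lib/python2.0/config \
    -lpython2.0 -lm -Wl,--export-dynamic
```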

Can someone please close this report?

Martin



From jeremy at beopen.com  Wed Sep  6 00:20:07 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Tue, 5 Sep 2000 18:20:07 -0400 (EDT)
Subject: [Python-Dev] undefined symbol in custom interpeter (Bug 110701)
In-Reply-To: <200009052203.AAA04445@loewis.home.cs.tu-berlin.de>
References: <200009052203.AAA04445@loewis.home.cs.tu-berlin.de>
Message-ID: <14773.29079.142749.496111@bitdiddle.concentric.net>

Closed it.  Thanks.

Jeremy



From skip at mojam.com  Wed Sep  6 00:38:02 2000
From: skip at mojam.com (Skip Montanaro)
Date: Tue, 5 Sep 2000 17:38:02 -0500 (CDT)
Subject: [Python-Dev] Updated version of asyncore.py?
In-Reply-To: <20000905145705.A2512@kronos.cnri.reston.va.us>
References: <20000905145705.A2512@kronos.cnri.reston.va.us>
Message-ID: <14773.30154.924465.632830@beluga.mojam.com>

    Andrew> The CVS versions have additional docstrings and a few typo fixes
    Andrew> in comments.  Should the Python library versions be updated?
    Andrew> (+1 from me, obviously.)

+1 from me as well.  I think asyncore.py and asynchat.py are important
enough to a number of packages that we ought to make the effort to keep the
Python-distributed versions up-to-date.  I suspect adding Sam as a developer
would make keeping it updated in CVS much easier than in the past.

Skip



From guido at beopen.com  Wed Sep  6 06:49:27 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 05 Sep 2000 23:49:27 -0500
Subject: [Python-Dev] Python 2.0b1 is released!
Message-ID: <200009060449.XAA02145@cj20424-a.reston1.va.home.com>

A unique event in all the history of Python: two releases on the same
day!  (At least in my timezone. :-)

Python 2.0b1 is released.  The BeOpen PythonLabs and our cast of
SourceForge volunteers have been working on this version since May.
Please go here to pick it up:

    http://www.pythonlabs.com/tech/python2.0/

There's a tarball and a Windows installer, online documentation (with
a new color scheme :-), RPMs, and a long list of new features.  OK, a
teaser:

  - Augmented assignment, e.g. x += 1
  - List comprehensions, e.g. [x**2 for x in range(10)]
  - Extended import statement, e.g. import Module as Name
  - Extended print statement, e.g. print >> file, "Hello"
  - Optional collection of cyclical garbage
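
[A quick sketch of the first three teaser features; this is editorial
illustration, not part of the announcement, and it sticks to syntax that
also runs on later Pythons — the extended print form (print >> file, ...)
was 2.x-only:]

```python
# Augmented assignment: rebind a name using its old value in one step.
x = 1
x += 1                                # x is now 2

# List comprehension: build a list from an expression over a sequence.
squares = [n ** 2 for n in range(5)]  # [0, 1, 4, 9, 16]

# Extended import: bind a module under a local alias.
import os.path as path_tools

print(x, squares, path_tools.join("a", "b"))
```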

There's one bit of sad news: according to Richard Stallman, this
version is no more compatible with the GPL than version 1.6 that was
released this morning by CNRI, because of a technicality concerning
the choice of law provision in the CNRI license.  Because 2.0b1 has to
be considered a derivative work of 1.6, this technicality in the CNRI
license applies to 2.0 too (and to any other derivative works of 1.6).
CNRI is still trying to work this out with Stallman, so I hope that we
will be able to release future versions of Python under a
GPL-compatible license.

There's Only One Way To Do It.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From cgw at fnal.gov  Wed Sep  6 16:31:11 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Wed, 6 Sep 2000 09:31:11 -0500 (CDT)
Subject: [Python-Dev] newimp.py
Message-ID: <14774.21807.691920.988409@buffalo.fnal.gov>

Installing the brand-new 2.0b1 I see this:

Compiling /usr/lib/python2.0/newimp.py ...
  File "/usr/lib/python2.0/newimp.py", line 137
    envDict[varNm] = val
                        ^
And attempting to import it gives me:

Python 2.0b1 (#14, Sep  6 2000, 09:24:44) 
[GCC 2.96 20000905 (experimental)] on linux2
Type "copyright", "credits" or "license" for more information.
>>> import newimp
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/usr/lib/python2.0/newimp.py", line 1567, in ?
    init()
  File "/usr/lib/python2.0/newimp.py", line 203, in init
    if (not aMod.__dict__.has_key(PKG_NM)) or full_reset:
AttributeError: 'None' object has no attribute '__dict__'

This code was last touched on 1995/07/12.  It looks defunct to me.
Should it be removed from the distribution or should I spend the time
to fix it?





From skip at mojam.com  Wed Sep  6 17:12:56 2000
From: skip at mojam.com (Skip Montanaro)
Date: Wed, 6 Sep 2000 10:12:56 -0500 (CDT)
Subject: [Python-Dev] newimp.py
In-Reply-To: <14774.21807.691920.988409@buffalo.fnal.gov>
References: <14774.21807.691920.988409@buffalo.fnal.gov>
Message-ID: <14774.24312.78161.249542@beluga.mojam.com>

    Charles> This code was last touched on 1995/07/12.  It looks defunct to
    Charles> me.  Should it be removed from the distribution or should I
    Charles> spend the time to fix it?

Charles,

Try deleting /usr/lib/python2.0/newimp.py, then do a re-install.  (Actually,
perhaps you should delete *.py in that directory and selectively delete
subdirectories as well.)  I don't see newimp.py anywhere in the 2.0b1 tree.

Skip



From cgw at fnal.gov  Wed Sep  6 19:56:44 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Wed, 6 Sep 2000 12:56:44 -0500 (CDT)
Subject: [Python-Dev] newimp.py
In-Reply-To: <14774.24312.78161.249542@beluga.mojam.com>
References: <14774.21807.691920.988409@buffalo.fnal.gov>
	<14774.24312.78161.249542@beluga.mojam.com>
Message-ID: <14774.34140.432485.450929@buffalo.fnal.gov>

Skip Montanaro writes:

 > Try deleting /usr/lib/python2.0/newimp.py, then do a re-install.  (Actually,
 > perhaps you should delete *.py in that directory and selectively delete
 > subdirectories as well.)  I don't see newimp.py anywhere in the 2.0b1 tree.

Something is really screwed up with CVS, or my understanding of it.
Look at this transcript:

buffalo:Lib$ pwd
/usr/local/src/Python-CVS/python/dist/src/Lib

buffalo:Lib$ rm newimp.py                                                      

buffalo:Lib$ cvs status newimp.py                                              
===================================================================
File: no file newimp.py         Status: Needs Checkout

   Working revision:    1.7
   Repository revision: 1.7     /cvsroot/python/python/dist/src/Lib/Attic/newimp.py,v
   Sticky Tag:          (none)
   Sticky Date:         (none)
   Sticky Options:      (none)

buffalo:Lib$ cvs update -dAP                                                   
cvs server: Updating .
U newimp.py
<rest of update output omitted>

buffalo:Lib$ ls -l newimp.py                                                   
-rwxr-xr-x   1 cgw      g023        54767 Sep  6 12:50 newimp.py

buffalo:Lib$ cvs status newimp.py 
===================================================================
File: newimp.py         Status: Up-to-date

   Working revision:    1.7
   Repository revision: 1.7     /cvsroot/python/python/dist/src/Lib/Attic/newimp.py,v
   Sticky Tag:          (none)
   Sticky Date:         (none)
   Sticky Options:      (none)

If I edit the CVS/Entries file and remove "newimp.py" from there, the
problem goes away.  But I work with many CVS repositories, and the
Python repository at SourceForge is the only one that forces me to
manually edit the Entries file.  You're really not supposed to need to
do that!

I'm running CVS version 1.10.6.  I think 1.10.6 is supposed to be a
"good" version to use.  What are other people using?  Does everybody
just go around editing CVS/Entries whenever files are removed from the
repository?  What am I doing wrong?  I'm starting to get a little
annoyed by the SourceForge CVS server.  Is it just me?







From nascheme at enme.ucalgary.ca  Wed Sep  6 20:06:29 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Wed, 6 Sep 2000 12:06:29 -0600
Subject: [Python-Dev] newimp.py
In-Reply-To: <14774.34140.432485.450929@buffalo.fnal.gov>; from Charles G Waldman on Wed, Sep 06, 2000 at 12:56:44PM -0500
References: <14774.21807.691920.988409@buffalo.fnal.gov> <14774.24312.78161.249542@beluga.mojam.com> <14774.34140.432485.450929@buffalo.fnal.gov>
Message-ID: <20000906120629.B1977@keymaster.enme.ucalgary.ca>

On Wed, Sep 06, 2000 at 12:56:44PM -0500, Charles G Waldman wrote:
> Something is really screwed up with CVS, or my understanding of it.

The latter I believe unless I completely misunderstand your transcript.
Look at "cvs remove".

  Neil



From cgw at fnal.gov  Wed Sep  6 20:19:50 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Wed, 6 Sep 2000 13:19:50 -0500 (CDT)
Subject: [Python-Dev] newimp.py
In-Reply-To: <20000906120629.B1977@keymaster.enme.ucalgary.ca>
References: <14774.21807.691920.988409@buffalo.fnal.gov>
	<14774.24312.78161.249542@beluga.mojam.com>
	<14774.34140.432485.450929@buffalo.fnal.gov>
	<20000906120629.B1977@keymaster.enme.ucalgary.ca>
Message-ID: <14774.35526.470896.324060@buffalo.fnal.gov>

Neil wrote:
 
 >Look at "cvs remove".

Sorry, I must have my "stupid" bit set today (didn't sleep enough last
night).  Do you mean that I'm supposed to cvs remove the file?  AFAIK,
when I do a "cvs update" that should remove all files that are no
longer pertinent.  Guido (or somebody else with CVS write access) does
the "cvs remove" and "cvs commit", and then when I do my next 
"cvs update" my local copy of the file should be removed.  At least
that's the way it works with all the other projects I track via CVS.

And of course if I try to "cvs remove newimp.py", I get: 

cvs [server aborted]: "remove" requires write access to the repository

as I would expect.

Or are you simply telling me that if I read the documentation on the
"cvs remove" command, the scales will fall from my eyes?  I've read
it, and it doesn't help :-(

Sorry for bugging everybody with my stupid CVS questions.  But I do
really think that something is screwy with the CVS repository.  And
I've never seen *any* documentation which suggests that you need to
manually edit the CVS/Entries file, which was Fred Drake's suggested
fix last time I reported such a problem with CVS.

Oh well, if this only affects me, then I guess the burden of proof is
on me.  Meanwhile I guess I just have to remember that I can't really
trust CVS to delete obsolete files.






From skip at mojam.com  Wed Sep  6 20:49:56 2000
From: skip at mojam.com (Skip Montanaro)
Date: Wed, 6 Sep 2000 13:49:56 -0500 (CDT)
Subject: [Python-Dev] newimp.py
In-Reply-To: <14774.35526.470896.324060@buffalo.fnal.gov>
References: <14774.21807.691920.988409@buffalo.fnal.gov>
	<14774.24312.78161.249542@beluga.mojam.com>
	<14774.34140.432485.450929@buffalo.fnal.gov>
	<20000906120629.B1977@keymaster.enme.ucalgary.ca>
	<14774.35526.470896.324060@buffalo.fnal.gov>
Message-ID: <14774.37332.534262.200618@beluga.mojam.com>

    Charles> Oh well, if this only affects me, then I guess the burden of
    Charles> proof is on me.  Meanwhile I guess I just have to remember that
    Charles> I can't really trust CVS to delete obsolete files.

Charles,

I'm not sure what to make of your problem.  I can't reproduce it.  On the
Linux systems from which I track the CVS repository, I run cvs 1.10.6,
1.10.7 and 1.10.8 and haven't seen the problem you describe.  I checked
six different Python trees on four different machines for evidence of
Lib/newimp.py.  One of the trees still references cvs.python.org and hasn't
been updated since September 4, 1999.  Even it doesn't have a Lib/newimp.py
file.  I believe the demise of Lib/newimp.py predates the creation of the
SourceForge CVS repository by quite a while.

You might try executing cvs checkout in a fresh directory and compare that
with your problematic tree.

Skip



From cgw at fnal.gov  Wed Sep  6 21:10:48 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Wed, 6 Sep 2000 14:10:48 -0500 (CDT)
Subject: [Python-Dev] newimp.py
In-Reply-To: <14774.37332.534262.200618@beluga.mojam.com>
References: <14774.21807.691920.988409@buffalo.fnal.gov>
	<14774.24312.78161.249542@beluga.mojam.com>
	<14774.34140.432485.450929@buffalo.fnal.gov>
	<20000906120629.B1977@keymaster.enme.ucalgary.ca>
	<14774.35526.470896.324060@buffalo.fnal.gov>
	<14774.37332.534262.200618@beluga.mojam.com>
Message-ID: <14774.38584.869242.974864@buffalo.fnal.gov>

Skip Montanaro writes:
 > 
 > I'm not sure what to make of your problem.  I can't reproduce it.  On the
 > Linux systems from which I track the CVS repository, I run cvs 1.10.6,
 > 1.10.7 and 1.10.8 and haven't seen the problem you describe.

How about if you go to one of those CVS trees, cd Lib, and type
"cvs update newimp.py" ?

If I check out a new tree, "newimp.py" is indeed not there.  But if I
do "cvs update newimp.py" it appears.  I am sure that this is *not*
the correct behavior for CVS.  If a file has been cvs remove'd, then
updating it should not cause it to reappear in my working copy.






From cgw at fnal.gov  Wed Sep  6 22:40:47 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Wed, 6 Sep 2000 15:40:47 -0500 (CDT)
Subject: [Python-Dev] newimp.py
In-Reply-To: <14774.43898.548664.200202@beluga.mojam.com>
References: <14774.21807.691920.988409@buffalo.fnal.gov>
	<14774.24312.78161.249542@beluga.mojam.com>
	<14774.34140.432485.450929@buffalo.fnal.gov>
	<20000906120629.B1977@keymaster.enme.ucalgary.ca>
	<14774.35526.470896.324060@buffalo.fnal.gov>
	<14774.37332.534262.200618@beluga.mojam.com>
	<14774.38584.869242.974864@buffalo.fnal.gov>
	<14774.43898.548664.200202@beluga.mojam.com>
Message-ID: <14774.43983.70263.934682@buffalo.fnal.gov>

Skip Montanaro writes:
 > 
 >     Charles> How about if you go to one of those CVS trees, cd Lib, and type
 >     Charles> "cvs update newimp.py" ?
 > 
 > I get 
 > 
 >     beluga:Lib% cd ~/src/python/dist/src/Lib/
 >     beluga:Lib% cvs update newinp.py
 >     cvs server: nothing known about newinp.py

That's because you typed "newinp", not "newimp".  Try it with an "M"
and see what happens.

    -C




From effbot at telia.com  Wed Sep  6 23:17:37 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Wed, 6 Sep 2000 23:17:37 +0200
Subject: [Python-Dev] newimp.py
References: <14774.21807.691920.988409@buffalo.fnal.gov><14774.24312.78161.249542@beluga.mojam.com><14774.34140.432485.450929@buffalo.fnal.gov><20000906120629.B1977@keymaster.enme.ucalgary.ca><14774.35526.470896.324060@buffalo.fnal.gov><14774.37332.534262.200618@beluga.mojam.com><14774.38584.869242.974864@buffalo.fnal.gov><14774.43898.548664.200202@beluga.mojam.com> <14774.43983.70263.934682@buffalo.fnal.gov>
Message-ID: <04bd01c01847$e9a197c0$766940d5@hagrid>

charles wrote:
>  >     Charles> How about if you go to one of those CVS trees, cd Lib, and type
>  >     Charles> "cvs update newimp.py" ?

why do you keep doing that? ;-)

> That's because you typed "newinp", not "newimp".  Try it with an "M"
> and see what happens.

the file has state "Exp".  iirc, it should be "dead" for CVS
to completely ignore it.

guess it was removed long before the CVS repository was
moved to source forge, and that something went wrong
somewhere in the process...

</F>




From cgw at fnal.gov  Wed Sep  6 23:08:09 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Wed, 6 Sep 2000 16:08:09 -0500 (CDT)
Subject: [Python-Dev] newimp.py
In-Reply-To: <14774.44642.258108.758548@beluga.mojam.com>
References: <14774.21807.691920.988409@buffalo.fnal.gov>
	<14774.24312.78161.249542@beluga.mojam.com>
	<14774.34140.432485.450929@buffalo.fnal.gov>
	<20000906120629.B1977@keymaster.enme.ucalgary.ca>
	<14774.35526.470896.324060@buffalo.fnal.gov>
	<14774.37332.534262.200618@beluga.mojam.com>
	<14774.38584.869242.974864@buffalo.fnal.gov>
	<14774.43898.548664.200202@beluga.mojam.com>
	<14774.43983.70263.934682@buffalo.fnal.gov>
	<14774.44642.258108.758548@beluga.mojam.com>
Message-ID: <14774.45625.177110.349575@buffalo.fnal.gov>

Skip Montanaro writes:

 > Ah, yes, I get something:
 > 
 >     beluga:Lib% cvs update newimp.py
 >     U newimp.py
 >     beluga:Lib% ls -l newimp.py 
 >     -rwxrwxr-x    1 skip     skip        54767 Jul 12  1995 newimp.py

 > Why newimp.py is still available, I have no idea.  Note the beginning of the
 > module's doc string:

It's clear that the file is quite obsolete.  It's been moved to the
Attic, and the most recent tag on it is r13beta1.

What's not clear is why "cvs update" still fetches it.

Something is way screwy with SourceForge's CVS server, I'm tellin' ya!

Maybe it's running on a Linux box and uses the pthreads library?  ;-)

I guess since the server is at SourceForge, it's not really under
immediate control of anybody at either python.org or
BeOpen/PythonLabs, so it doesn't seem very likely that this will get
looked into anytime soon.  Sigh....






From guido at beopen.com  Thu Sep  7 05:07:09 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 06 Sep 2000 22:07:09 -0500
Subject: [Python-Dev] newimp.py
In-Reply-To: Your message of "Wed, 06 Sep 2000 23:17:37 +0200."
             <04bd01c01847$e9a197c0$766940d5@hagrid> 
References: <14774.21807.691920.988409@buffalo.fnal.gov><14774.24312.78161.249542@beluga.mojam.com><14774.34140.432485.450929@buffalo.fnal.gov><20000906120629.B1977@keymaster.enme.ucalgary.ca><14774.35526.470896.324060@buffalo.fnal.gov><14774.37332.534262.200618@beluga.mojam.com><14774.38584.869242.974864@buffalo.fnal.gov><14774.43898.548664.200202@beluga.mojam.com> <14774.43983.70263.934682@buffalo.fnal.gov>  
            <04bd01c01847$e9a197c0$766940d5@hagrid> 
Message-ID: <200009070307.WAA07393@cj20424-a.reston1.va.home.com>

> the file has state "Exp".  iirc, it should be "dead" for CVS
> to completely ignore it.
> 
> guess it was removed long before the CVS repository was
> moved to source forge, and that something went wrong
> somewhere in the process...

Could've been an old version of CVS.

Anyway, I checked it out, rm'ed it, cvs-rm'ed it, and committed it --
that seems to have taken care of it.

I hope the file wasn't in any beta distribution.  Was it?

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From sjoerd at oratrix.nl  Thu Sep  7 12:40:28 2000
From: sjoerd at oratrix.nl (Sjoerd Mullender)
Date: Thu, 07 Sep 2000 12:40:28 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules cPickle.c,2.50,2.51
In-Reply-To: Your message of Wed, 06 Sep 2000 17:11:43 -0700.
             <200009070011.RAA09907@slayer.i.sourceforge.net> 
References: <200009070011.RAA09907@slayer.i.sourceforge.net> 
Message-ID: <20000907104029.2B35031047C@bireme.oratrix.nl>

This doesn't work.  Neither m nor d are initialized at this point.

On Wed, Sep 6 2000 Guido van Rossum wrote:

> Update of /cvsroot/python/python/dist/src/Modules
> In directory slayer.i.sourceforge.net:/tmp/cvs-serv9746
> 
> Modified Files:
> 	cPickle.c 
> Log Message:
> Simple fix from Jin Fulton to avoid returning a half-initialized
> module when e.g. copy_reg.py doesn't exist.  This caused a core dump.
> 
> This closes SF bug 112944.
> 
> 
> Index: cPickle.c
> ===================================================================
> RCS file: /cvsroot/python/python/dist/src/Modules/cPickle.c,v
> retrieving revision 2.50
> retrieving revision 2.51
> diff -C2 -r2.50 -r2.51
> *** cPickle.c	2000/08/12 20:58:11	2.50
> --- cPickle.c	2000/09/07 00:11:40	2.51
> ***************
> *** 4522,4525 ****
> --- 4522,4527 ----
>       PyObject *compatible_formats;
>   
> +     if (init_stuff(m, d) < 0) return;
> + 
>       Picklertype.ob_type = &PyType_Type;
>       Unpicklertype.ob_type = &PyType_Type;
> ***************
> *** 4543,4547 ****
>       Py_XDECREF(format_version);
>       Py_XDECREF(compatible_formats);
> - 
> -     init_stuff(m, d);
>   }
> --- 4545,4547 ----
> 
> 

-- Sjoerd Mullender <sjoerd.mullender at oratrix.com>



From thomas.heller at ion-tof.com  Thu Sep  7 15:42:01 2000
From: thomas.heller at ion-tof.com (Thomas Heller)
Date: Thu, 7 Sep 2000 15:42:01 +0200
Subject: [Python-Dev] SF checkin policies
Message-ID: <02a401c018d1$669fbcf0$4500a8c0@thomasnb>

What are the checkin policies to the sourceforge
CVS repository?

Now that I have checkin rights (for the distutils),
I'm about to checkin new versions of the bdist_wininst
command. This is still under active development.

Should CVS only contain complete, working versions?
Or are intermediate, nonworking versions allowed?
Will a warning be given here on python-dev just before
a new (beta) distribution is created?

Thomas Heller






From fredrik at pythonware.com  Thu Sep  7 16:04:13 2000
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Thu, 7 Sep 2000 16:04:13 +0200
Subject: [Python-Dev] SF checkin policies
References: <02a401c018d1$669fbcf0$4500a8c0@thomasnb>
Message-ID: <025501c018d4$81301800$0900a8c0@SPIFF>

> What are the checkin policies to the sourceforge
> CVS repository?

http://python.sourceforge.net/peps/pep-0200.html

    Use good sense when committing changes.  You should know what we
    mean by good sense or we wouldn't have given you commit privileges
    <0.5 wink>.

    /.../

    Any significant new feature must be described in a PEP and
    approved before it is checked in.

    /.../

    Any significant code addition, such as a new module or large
    patch, must include test cases for the regression test and
    documentation.  A patch should not be checked in until the tests
    and documentation are ready.

    /.../

    It is not acceptable for any checked in code to cause the
    regression test to fail.  If a checkin causes a failure, it must
    be fixed within 24 hours or it will be backed out.

</F>




From guido at beopen.com  Thu Sep  7 17:50:25 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 07 Sep 2000 10:50:25 -0500
Subject: [Python-Dev] SF checkin policies
In-Reply-To: Your message of "Thu, 07 Sep 2000 15:42:01 +0200."
             <02a401c018d1$669fbcf0$4500a8c0@thomasnb> 
References: <02a401c018d1$669fbcf0$4500a8c0@thomasnb> 
Message-ID: <200009071550.KAA09309@cj20424-a.reston1.va.home.com>

> What are the checkin policies to the sourceforge
> CVS repository?
> 
> Now that I have checkin rights (for the distutils),
> I'm about to checkin new versions of the bdist_wininst
> command. This is still under active development.
> 
> Should CVS only contain complete, working versions?
> Or are intermediate, nonworking versions allowed?
> Will a warning be given here on python-dev just before
> a new (beta) distribution is created?

Please check in only working, tested code!  There are lots of people
(also outside the developers group) who do daily checkouts.  If they
get broken code, they'll scream hell!

We publicize and discuss the release schedule pretty intensely here so
you should have plenty of warning.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From Vladimir.Marangozov at inrialpes.fr  Thu Sep  7 17:59:40 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Thu, 7 Sep 2000 17:59:40 +0200 (CEST)
Subject: [Python-Dev] newimp.py
In-Reply-To: <200009070307.WAA07393@cj20424-a.reston1.va.home.com> from "Guido van Rossum" at Sep 06, 2000 10:07:09 PM
Message-ID: <200009071559.RAA06832@python.inrialpes.fr>

Guido van Rossum wrote:
> 
> Anyway, I checked it out, rm'ed it, cvs-rm'ed it, and committed it --
> that seems to have taken care of it.
> 
> I hope the file wasn't in any beta distribution.  Was it?

No. There's a .cvsignore file in the root directory of the latest
tarball, though. Not a big deal.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From Vladimir.Marangozov at inrialpes.fr  Thu Sep  7 18:46:11 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Thu, 7 Sep 2000 18:46:11 +0200 (CEST)
Subject: [Python-Dev] python -U fails
Message-ID: <200009071646.SAA07004@python.inrialpes.fr>

Seen on c.l.py (import site fails due to eval on a Unicode string):

~/python/Python-2.0b1>python -U
'import site' failed; use -v for traceback
Python 2.0b1 (#2, Sep  7 2000, 12:59:53) 
[GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
Type "copyright", "credits" or "license" for more information.
>>> eval (u"1+2")
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: eval() argument 1 must be string or code object
>>> 

The offending eval is in os.py

Traceback (most recent call last):
  File "./Lib/site.py", line 60, in ?
    import sys, os
  File "./Lib/os.py", line 331, in ?
    if _exists("fork") and not _exists("spawnv") and _exists("execv"):
  File "./Lib/os.py", line 325, in _exists
    eval(name)

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From akuchlin at mems-exchange.org  Thu Sep  7 22:01:44 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Thu, 07 Sep 2000 16:01:44 -0400
Subject: [Python-Dev] hasattr() and Unicode strings
Message-ID: <E13X7rk-0005N9-00@kronos.cnri.reston.va.us>

hasattr(), getattr(), and doubtless other built-in functions
don't accept Unicode strings at all:

>>> import sys
>>> hasattr(sys, u'abc')
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: hasattr, argument 2: expected string, unicode found

Is this a bug or a feature?  I'd say bug; the Unicode should be
coerced using the default ASCII encoding, and an exception raised if
that isn't possible.
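The proposed rule can be sketched as follows (illustrative only;
`coerce_attr_name` is a hypothetical helper, not a real builtin, and the
check is written in modern terms where the name is tested against the
default ASCII encoding):

```python
import sys

def coerce_attr_name(name):
    # Hypothetical helper showing the proposed rule: accept the name
    # only if it fits the default ASCII encoding, raise otherwise.
    try:
        name.encode('ascii')
    except UnicodeEncodeError:
        raise TypeError("attribute name is not ASCII-encodable")
    return name

# An ASCII-only Unicode name would then work with hasattr() et al.
assert hasattr(sys, coerce_attr_name(u'argv'))
```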

--amk



From fdrake at beopen.com  Thu Sep  7 22:02:52 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 7 Sep 2000 16:02:52 -0400 (EDT)
Subject: [Python-Dev] hasattr() and Unicode strings
In-Reply-To: <E13X7rk-0005N9-00@kronos.cnri.reston.va.us>
References: <E13X7rk-0005N9-00@kronos.cnri.reston.va.us>
Message-ID: <14775.62572.442732.589738@cj42289-a.reston1.va.home.com>

Andrew Kuchling writes:
 > Is this a bug or a feature?  I'd say bug; the Unicode should be
 > coerced using the default ASCII encoding, and an exception raised if
 > that isn't possible.

  I agree.
  Marc-Andre, what do you think?


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From martin at loewis.home.cs.tu-berlin.de  Thu Sep  7 22:08:45 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Thu, 7 Sep 2000 22:08:45 +0200
Subject: [Python-Dev] xml missing in Windows installer?
Message-ID: <200009072008.WAA00862@loewis.home.cs.tu-berlin.de>

Using the 2.0b1 Windows installer from BeOpen, I could not find
Lib/xml afterwards, whereas the .tgz does contain the xml package. Was
this intentional? Did I miss something?

Regards,
Martin




From effbot at telia.com  Thu Sep  7 22:25:02 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Thu, 7 Sep 2000 22:25:02 +0200
Subject: [Python-Dev] xml missing in Windows installer?
References: <200009072008.WAA00862@loewis.home.cs.tu-berlin.de>
Message-ID: <004c01c01909$b832a220$766940d5@hagrid>

martin wrote:

> Using the 2.0b1 Windows installer from BeOpen, I could not find
> Lib/xml afterwards, whereas the .tgz does contain the xml package. Was
> this intentional? Did I miss something?

Date: Thu, 7 Sep 2000 01:34:04 -0700
From: Tim Peters <tim_one at users.sourceforge.net>
To: python-checkins at python.org
Subject: [Python-checkins] CVS: python/dist/src/PCbuild python20.wse,1.15,1.16

Update of /cvsroot/python/python/dist/src/PCbuild
In directory slayer.i.sourceforge.net:/tmp/cvs-serv31884

Modified Files:
 python20.wse 
Log Message:
Windows installer, reflecting changes that went into a replacement 2.0b1
.exe that will show up on PythonLabs.com later today:
    Include the Lib\xml\ package (directory + subdirectories).
    Include the Lib\lib-old\ directory.
    Include the Lib\test\*.xml test cases (well, just one now).
    Remove the redundant install of Lib\*.py (looks like a stray duplicate
        line that's been there a long time).  Because of this, the new
        installer is a little smaller despite having more stuff in it.

...

</F>




From guido at beopen.com  Thu Sep  7 23:32:16 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 07 Sep 2000 16:32:16 -0500
Subject: [Python-Dev] hasattr() and Unicode strings
In-Reply-To: Your message of "Thu, 07 Sep 2000 16:01:44 -0400."
             <E13X7rk-0005N9-00@kronos.cnri.reston.va.us> 
References: <E13X7rk-0005N9-00@kronos.cnri.reston.va.us> 
Message-ID: <200009072132.QAA10047@cj20424-a.reston1.va.home.com>

> hasattr(), getattr(), and doubtless other built-in functions
> don't accept Unicode strings at all:
> 
> >>> import sys
> >>> hasattr(sys, u'abc')
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> TypeError: hasattr, argument 2: expected string, unicode found
> 
> Is this a bug or a feature?  I'd say bug; the Unicode should be
> coerced using the default ASCII encoding, and an exception raised if
> that isn't possible.

Agreed.

There are probably a bunch of things that need to be changed before
this works though; getattr() c.s. require a string, then call
PyObject_GetAttr() which also checks for a string unless the object
supports tp_getattro -- but that's only true for classes and
instances.

Also, should we convert the string to 8-bit, or should we allow
Unicode attribute names?

It seems there's no easy fix -- better address this after 2.0 is
released.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From martin at loewis.home.cs.tu-berlin.de  Thu Sep  7 22:26:28 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Thu, 7 Sep 2000 22:26:28 +0200
Subject: [Python-Dev] Naming of config.h
Message-ID: <200009072026.WAA01094@loewis.home.cs.tu-berlin.de>

The fact that Python installs its config.h as
<prefix>/python2.0/config.h is annoying if one tries to combine Python
with some other autoconfiscated package.

If you configure that other package, it detects that it needs to add
-I/usr/local/include/python2.0; it also provides its own
config.h. When compiling the files

#include "config.h"

could then mean either one or the other. That can cause quite some
confusion: if the other package's config.h is picked up, LONG_LONG might
not be defined, even though it should be on that port.

This issue can be mitigated by renaming "config.h" to
"pyconfig.h". That still might result in duplicate defines, but likely
SIZE_FLOAT (for example) has the same value in all definitions.

Regards,
Martin




From gstein at lyra.org  Thu Sep  7 22:41:12 2000
From: gstein at lyra.org (Greg Stein)
Date: Thu, 7 Sep 2000 13:41:12 -0700
Subject: [Python-Dev] Naming of config.h
In-Reply-To: <200009072026.WAA01094@loewis.home.cs.tu-berlin.de>; from martin@loewis.home.cs.tu-berlin.de on Thu, Sep 07, 2000 at 10:26:28PM +0200
References: <200009072026.WAA01094@loewis.home.cs.tu-berlin.de>
Message-ID: <20000907134112.W3278@lyra.org>

On Thu, Sep 07, 2000 at 10:26:28PM +0200, Martin v. Loewis wrote:
>...
> This issue can be relaxed by renaming the "config.h" to
> "pyconfig.h". That still might result in duplicate defines, but likely
> SIZE_FLOAT (for example) has the same value in all definitions.

This is not a simple problem. APR (a subcomponent of Apache) is set up to
build as an independent library. It is also autoconf'd, but it goes through
a *TON* of work to avoid passing any autoconf symbols into the public space.

Renaming the config.h file would be an interesting start, but it won't solve
the conflicting symbols (or typedefs!) problem. And from a portability
standpoint, that is important: some compilers don't like redefinitions, even
if they are the same.

IOW, if you want to make this "correct", then plan on setting aside a good
chunk of time.

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/



From guido at beopen.com  Thu Sep  7 23:57:39 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 07 Sep 2000 16:57:39 -0500
Subject: [Python-Dev] newimp.py
In-Reply-To: Your message of "Thu, 07 Sep 2000 17:59:40 +0200."
             <200009071559.RAA06832@python.inrialpes.fr> 
References: <200009071559.RAA06832@python.inrialpes.fr> 
Message-ID: <200009072157.QAA10441@cj20424-a.reston1.va.home.com>

> No. There's a .cvsignore file in the root directory of the latest
> tarball, though. Not a big deal.

Typically we leave all the .cvsignore files in.  They don't hurt
anybody, and getting rid of them manually is just a pain.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From akuchlin at mems-exchange.org  Thu Sep  7 23:27:03 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Thu, 7 Sep 2000 17:27:03 -0400
Subject: [Python-Dev] hasattr() and Unicode strings
In-Reply-To: <200009072132.QAA10047@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Thu, Sep 07, 2000 at 04:32:16PM -0500
References: <E13X7rk-0005N9-00@kronos.cnri.reston.va.us> <200009072132.QAA10047@cj20424-a.reston1.va.home.com>
Message-ID: <20000907172703.A1095@kronos.cnri.reston.va.us>

On Thu, Sep 07, 2000 at 04:32:16PM -0500, Guido van Rossum wrote:
>It seems there's no easy fix -- better address this after 2.0 is
>released.

OK; I'll file a bug report on SourceForge so this doesn't get forgotten.

--amk



From fdrake at beopen.com  Thu Sep  7 23:26:18 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 7 Sep 2000 17:26:18 -0400 (EDT)
Subject: [Python-Dev] New PDF documentation & Windows installer
Message-ID: <14776.2042.985615.611778@cj42289-a.reston1.va.home.com>

  As many people noticed, there was a problem with the PDF files
generated for the recent Python 2.0b1 release.  I've found & corrected
the problem, and uploaded new packages to the Web site.  Please get
new PDF files from:

	http://www.pythonlabs.com/tech/python2.0/download.html

  The new files show a date of September 7, 2000, rather than
September 5, 2000.
  An updated Windows installer is available which actually installs
the XML package.
  I'm sorry for any inconvenience these problems have caused.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From effbot at telia.com  Thu Sep  7 23:43:28 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Thu, 7 Sep 2000 23:43:28 +0200
Subject: [Python-Dev] update: tkinter problems on win95
Message-ID: <004101c01914$ae501ca0$766940d5@hagrid>

just fyi, I've now reduced the problem to two small C programs:
one program initializes Tcl and Tk in the same way as Tkinter --
and the program hangs in the same way as Tkinter (most likely
inside some finalization code that's called from DllMain).

the other does things in the same way as wish, and it never
hangs...

:::

still haven't figured out exactly what's different, but it's clearly
a problem with _tkinter's initialization code, and nothing else.  I'll
post a patch as soon as I have one...

</F>




From barry at scottb.demon.co.uk  Fri Sep  8 01:02:32 2000
From: barry at scottb.demon.co.uk (Barry Scott)
Date: Fri, 8 Sep 2000 00:02:32 +0100
Subject: [Python-Dev] xml missing in Windows installer?
In-Reply-To: <004c01c01909$b832a220$766940d5@hagrid>
Message-ID: <000901c0191f$b48d65e0$060210ac@private>

Please don't release new kits with identical names/versions as old kits.

How do you expect anyone to tell if they have the fix or not?

Finding and fixing bugs show you care about quality.
Stealth releases negate the benefit.

	Barry


> -----Original Message-----
> From: python-dev-admin at python.org [mailto:python-dev-admin at python.org]On
> Behalf Of Fredrik Lundh
> Sent: 07 September 2000 21:25
> To: Martin v. Loewis
> Cc: python-dev at python.org
> Subject: Re: [Python-Dev] xml missing in Windows installer?
> 
> 
> martin wrote:
> 
> > Using the 2.0b1 Windows installer from BeOpen, I could not find
> > Lib/xml afterwards, whereas the .tgz does contain the xml package. Was
> > this intentional? Did I miss something?
> 
> Date: Thu, 7 Sep 2000 01:34:04 -0700
> From: Tim Peters <tim_one at users.sourceforge.net>
> To: python-checkins at python.org
> Subject: [Python-checkins] CVS: python/dist/src/PCbuild python20.wse,1.15,1.16
> 
> Update of /cvsroot/python/python/dist/src/PCbuild
> In directory slayer.i.sourceforge.net:/tmp/cvs-serv31884
> 
> Modified Files:
>  python20.wse 
> Log Message:
> Windows installer, reflecting changes that went into a replacement 2.0b1
> .exe that will show up on PythonLabs.com later today:
>     Include the Lib\xml\ package (directory + subdirectories).
>     Include the Lib\lib-old\ directory.
>     Include the Lib\test\*.xml test cases (well, just one now).
>     Remove the redundant install of Lib\*.py (looks like a stray duplicate
>         line that's been there a long time).  Because of this, the new
>         installer is a little smaller despite having more stuff in it.
> 
> ...
> 
> </F>
> 
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://www.python.org/mailman/listinfo/python-dev
> 



From gward at mems-exchange.org  Fri Sep  8 01:16:56 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Thu, 7 Sep 2000 19:16:56 -0400
Subject: [Python-Dev] Noisy test_gc
Message-ID: <20000907191655.A9664@ludwig.cnri.reston.va.us>

Just built 2.0b1, and noticed that the GC test script is rather noisy:

  ...
  test_gc
  gc: collectable <list 0x818cf54>
  gc: collectable <dictionary 0x822f8b4>
  gc: collectable <list 0x818cf54>
  gc: collectable <tuple 0x822f484>
  gc: collectable <class 0x822f8b4>
  gc: collectable <dictionary 0x822f8e4>
  gc: collectable <A instance at 0x818cf54>
  gc: collectable <dictionary 0x822fb6c>
  gc: collectable <A instance at 0x818cf54>
  gc: collectable <dictionary 0x822fb9c>
  gc: collectable <instance method 0x81432bc>
  gc: collectable <B instance at 0x822f0d4>
  gc: collectable <dictionary 0x822fc9c>
  gc: uncollectable <dictionary 0x822fc34>
  gc: uncollectable <A instance at 0x818cf54>
  gc: collectable <dictionary 0x822fbcc>
  gc: collectable <function 0x8230fb4>
  test_gdbm
  ...

which is the same as it was the last time I built from CVS, but I would
have thought this should go away for a real release...

        Greg



From guido at beopen.com  Fri Sep  8 03:07:58 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 07 Sep 2000 20:07:58 -0500
Subject: [Python-Dev] GPL license issues hit Linux Today
Message-ID: <200009080107.UAA11841@cj20424-a.reston1.va.home.com>

http://linuxtoday.com/news_story.php3?ltsn=2000-09-07-001-21-OS-CY-DB

Plus my response

http://linuxtoday.com/news_story.php3?ltsn=2000-09-07-011-21-OS-CY-SW

I'll be off until Monday, relaxing at the beach!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From Vladimir.Marangozov at inrialpes.fr  Fri Sep  8 02:14:07 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Fri, 8 Sep 2000 02:14:07 +0200 (CEST)
Subject: [Python-Dev] Noisy test_gc
In-Reply-To: <20000907191655.A9664@ludwig.cnri.reston.va.us> from "Greg Ward" at Sep 07, 2000 07:16:56 PM
Message-ID: <200009080014.CAA07599@python.inrialpes.fr>

Greg Ward wrote:
> 
> Just built 2.0b1, and noticed that the GC test script is rather noisy:

The GC patch at SF makes it silent. It will be fixed for the final release.
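In gc-module terms, the fix amounts to not enabling the debug flags by
default; the output remains available to anyone who wants it back (a
sketch against the `gc` API, assuming the flag names are unchanged):

```python
import gc

# With no debug flags set, collections run silently; this is what the
# patched test_gc relies on.
gc.set_debug(0)
assert gc.get_debug() == 0

# Opting back in restores the "gc: collectable <...>" messages that
# Greg quotes above.
gc.set_debug(gc.DEBUG_COLLECTABLE | gc.DEBUG_UNCOLLECTABLE)
assert gc.get_debug() != 0
gc.set_debug(0)  # leave things quiet again
```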

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From gward at python.net  Fri Sep  8 04:40:07 2000
From: gward at python.net (Greg Ward)
Date: Thu, 7 Sep 2000 22:40:07 -0400
Subject: [Python-Dev] Finding landmark when prefix != exec-prefix
Message-ID: <20000907224007.A959@beelzebub>

Hey all --

this is a bug I noticed in 1.5.2 ages ago, and never investigated
further.  I've just figured it out a little bit more; right now I can
only verify it in 1.5, as I don't have the right sort of 1.6 or 2.0
installation at home.  So if this has been fixed, I'll just shut up.

Bottom line: if you have an installation where prefix != exec-prefix,
and there is another Python installation on the system, then Python
screws up finding the landmark file (string.py in 1.5.2) and computes
the wrong prefix and exec-prefix.

Here's the scenario: I have a Red Hat 6.2 installation with the
"official" Red Hat python in /usr/bin/python.  I have a local build
installed with prefix=/usr/local/python and
exec-prefix=/usr/local/python.i86-linux; /usr/local/bin/python is a
symlink to ../python.i86-linux/bin/python.  (This dates to my days of
trying to understand what gets installed where.  Now, of course, I could
tell you what Python installs where in my sleep with one hand tied
behind my back... ;-)

Witness:
  $ /usr/bin/python -c "import sys ; print sys.prefix"
  /usr
  $/usr/local/bin/python -c "import sys ; print sys.prefix"
  /usr

...even though /usr/local/bin/python's library is really in
/usr/local/python/lib/python1.5 and
/usr/local/python.i86-linux/lib/python1.5.

If I erase Red Hat's Python, then /usr/local/bin/python figures out its
prefix correctly.

Using "strace" sheds a little more light on things; here's what I get
after massaging the "strace" output a bit (grep for "string.py"; all
that shows up are 'stat()' calls, where only the last succeeds; I've
stripped out everything but the filename):

  /usr/local/bin/../python.i86-linux/bin/lib/python1.5/string.py
  /usr/local/bin/../python.i86-linux/bin/lib/python1.5/string.pyc
  /usr/local/bin/../python.i86-linux/lib/python1.5/string.py
  /usr/local/bin/../python.i86-linux/lib/python1.5/string.pyc
  /usr/local/bin/../lib/python1.5/string.py
  /usr/local/bin/../lib/python1.5/string.pyc
  /usr/local/bin/lib/python1.5/string.py
  /usr/local/bin/lib/python1.5/string.pyc
  /usr/local/lib/python1.5/string.py
  /usr/local/lib/python1.5/string.pyc
  /usr/lib/python1.5/string.py                # success because of Red Hat's
                                              # Python installation

Well, of course.  Python doesn't know what its true prefix is until it
has found its landmark file, but it can't find its landmark until it
knows its true prefix.  Here's the "strace" output after erasing Red
Hat's Python RPM:

  $ strace /usr/local/bin/python -c 1 2>&1 | grep 'string\.py'
  /usr/local/bin/../python.i86-linux/bin/lib/python1.5/string.py
  /usr/local/bin/../python.i86-linux/bin/lib/python1.5/string.pyc
  /usr/local/bin/../python.i86-linux/lib/python1.5/string.py
  /usr/local/bin/../python.i86-linux/lib/python1.5/string.pyc
  /usr/local/bin/../lib/python1.5/string.py
  /usr/local/bin/../lib/python1.5/string.pyc
  /usr/local/bin/lib/python1.5/string.py
  /usr/local/bin/lib/python1.5/string.pyc
  /usr/local/lib/python1.5/string.py
  /usr/local/lib/python1.5/string.pyc
  /usr/lib/python1.5/string.py               # now fail since I removed 
  /usr/lib/python1.5/string.pyc              # Red Hat's RPM
  /usr/local/python/lib/python1.5/string.py

A-ha!  When the /usr installation is no longer there to fool it, Python
then looks in the right place.
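The search being traced can be sketched as a walk up from the directory
containing the binary, taking the first directory in which the landmark
exists (a simplification: the real logic lives in Modules/getpath.c and
also follows symlinks and honors PYTHONHOME first):

```python
import os

def find_prefix(bindir, landmark=os.path.join("lib", "python1.5", "string.py")):
    # Walk upward from the binary's directory; the first directory that
    # contains the landmark becomes sys.prefix.  Walking up from
    # /usr/local/bin passes through /usr, which is exactly how Red
    # Hat's /usr/lib/python1.5/string.py wins the race described above.
    d = os.path.abspath(bindir)
    while True:
        if os.path.isfile(os.path.join(d, landmark)):
            return d
        parent = os.path.dirname(d)
        if parent == d:  # reached the filesystem root: not found
            return None
        d = parent
```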

So, has this bug been fixed in 1.6 or 2.0?  If not, where do I look?

        Greg

PS. what about hard-coding a prefix and exec-prefix in the binary, and
only searching for the landmark if the hard-coded values fail?  That
way, this complicated and expensive search is only done if the
installation has been relocated.

-- 
Greg Ward                                      gward at python.net
http://starship.python.net/~gward/



From jeremy at beopen.com  Fri Sep  8 05:13:09 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 7 Sep 2000 23:13:09 -0400 (EDT)
Subject: [Python-Dev] Finding landmark when prefix != exec-prefix
In-Reply-To: <20000907224007.A959@beelzebub>
References: <20000907224007.A959@beelzebub>
Message-ID: <14776.22853.316652.994320@bitdiddle.concentric.net>

>>>>> "GW" == Greg Ward <gward at python.net> writes:

  GW> PS. what about hard-coding a prefix and exec-prefix in the
  GW> binary, and only searching for the landmark if the hard-coded
  GW> values fail?  That way, this complicated and expensive search is
  GW> only done if the installation has been relocated.

I've tried not to understand much about the search process.  I know
that it is slow (relatively speaking) and that it can be avoided by
setting the PYTHONHOME environment variable.

Jeremy



From MarkH at ActiveState.com  Fri Sep  8 06:02:07 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Fri, 8 Sep 2000 15:02:07 +1100
Subject: [Python-Dev] win32all-133 for Python 1.6, and win32all-134 for Python 2.0
Message-ID: <ECEPKNMJLHAPFFJHDOJBIEGJDIAA.MarkH@ActiveState.com>

FYI - I'm updating the starship pages, and will make an announcement to the
newsgroup soon.

But in the meantime, some advance notice:

* All new win32all builds will be released from
http://www.activestate.com/Products/ActivePython/win32all.html.  This is
good for me - ActiveState actually have paid systems guys :-)
win32all-133.exe for 1.6b1 and 1.6 final can be found there.

* win32all-134.exe for the Python 2.x betas is not yet referenced at that
page, but is at
www.activestate.com/download/ActivePython/windows/win32all/win32all-134.exe

If you have ActivePython, you do _not_ need win32all.

Please let me know if you have any problems, or any other questions
regarding this...

Thanks,

Mark.


_______________________________________________
win32-reg-users maillist  -  win32-reg-users at pythonpros.com
http://mailman.pythonpros.com/mailman/listinfo/win32-reg-users




From tim_one at email.msn.com  Fri Sep  8 09:45:14 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 8 Sep 2000 03:45:14 -0400
Subject: [Python-Dev] xml missing in Windows installer?
In-Reply-To: <000901c0191f$b48d65e0$060210ac@private>
Message-ID: <LNBBLJKPBEHFEDALKOLCOENFHEAA.tim_one@email.msn.com>

[Barry Scott]
> Please don't release new kits with identical names/versions as old kits.

It *is* the 2.0b1 release; the only difference is that two of the 2.0b1 Lib
sub-directories that got left out by mistake got included.  This is
repairing an error in the release process, not in the code.

> How do you expect anyone to tell if they have the fix or not?

If they have Lib\xml, they've got the repaired release.  Else they've got
the flawed one.  They can also tell from Python's startup line:

C:\Python20>python
Python 2.0b1 (#4, Sep  7 2000, 02:40:55) [MSC 32 bit (Intel)] on win32
Type "copyright", "credits" or "license" for more information.
>>>

The "#4" and the timestamp say that's the repaired release.  The flawed
release has "#3" there and an earlier timestamp.  If someone is still
incompetent to tell the difference <wink>, they can look at the installer
file size.

> Finding and fixing bugs show you care about quality.
> Stealth releases negate the benefit.

'Twasn't meant to be a "stealth release":  that's *another* screwup!  The
webmaster  didn't get the explanation onto the download page yet, for
reasons beyond his control.  Fred Drake *did* manage to update the
installer, and that was the most important part.  The explanation will show
up ... beats me, ask CNRI <wink>.





From mal at lemburg.com  Fri Sep  8 13:47:08 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 08 Sep 2000 13:47:08 +0200
Subject: [Python-Dev] python -U fails
References: <200009071646.SAA07004@python.inrialpes.fr>
Message-ID: <39B8D1BC.9B46E005@lemburg.com>

Vladimir Marangozov wrote:
> 
> Seen on c.l.py (import site fails due to eval on an unicode string):
> 
> ~/python/Python-2.0b1>python -U
> 'import site' failed; use -v for traceback
> Python 2.0b1 (#2, Sep  7 2000, 12:59:53)
> [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
> Type "copyright", "credits" or "license" for more information.
> >>> eval (u"1+2")
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> TypeError: eval() argument 1 must be string or code object
> >>>
> 
> The offending eval is in os.py
> 
> Traceback (most recent call last):
>   File "./Lib/site.py", line 60, in ?
>     import sys, os
>   File "./Lib/os.py", line 331, in ?
>     if _exists("fork") and not _exists("spawnv") and _exists("execv"):
>   File "./Lib/os.py", line 325, in _exists
>     eval(name)

Note that many things fail when Python is started with -U... that
switch was introduced to get an idea of which parts of the standard
library fail to work in a mixed string/Unicode environment.

In the above case, I guess the eval() could be replaced by some
other logic which does a try: except NameError: check.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Fri Sep  8 14:02:46 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 08 Sep 2000 14:02:46 +0200
Subject: [Python-Dev] hasattr() and Unicode strings
References: <E13X7rk-0005N9-00@kronos.cnri.reston.va.us> <14775.62572.442732.589738@cj42289-a.reston1.va.home.com>
Message-ID: <39B8D566.4011E433@lemburg.com>

"Fred L. Drake, Jr." wrote:
> 
> Andrew Kuchling writes:
>  > Is this a bug or a feature?  I'd say bug; the Unicode should be
>  > coerced using the default ASCII encoding, and an exception raised if
>  > that isn't possible.
> 
>   I agree.
>   Marc-Andre, what do you think?

Sounds ok to me.

The only question is where to apply the patch:
1. in hasattr()
2. in PyObject_GetAttr()

I'd opt for using the second solution (it should allow string
and Unicode objects as attribute name). hasattr() would then
have to be changed to use the "O" parser marker.

What do you think ?

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Fri Sep  8 14:09:03 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 08 Sep 2000 14:09:03 +0200
Subject: [Python-Dev] hasattr() and Unicode strings
References: <E13X7rk-0005N9-00@kronos.cnri.reston.va.us> <200009072132.QAA10047@cj20424-a.reston1.va.home.com>
Message-ID: <39B8D6DF.AA11746D@lemburg.com>

Guido van Rossum wrote:
> 
> > hasattr(), getattr(), and doubtless other built-in functions
> > don't accept Unicode strings at all:
> >
> > >>> import sys
> > >>> hasattr(sys, u'abc')
> > Traceback (most recent call last):
> >   File "<stdin>", line 1, in ?
> > TypeError: hasattr, argument 2: expected string, unicode found
> >
> > Is this a bug or a feature?  I'd say bug; the Unicode should be
> > coerced using the default ASCII encoding, and an exception raised if
> > that isn't possible.
> 
> Agreed.
> 
> There are probably a bunch of things that need to be changed before
> this works though; getattr() c.s. require a string, then call
> PyObject_GetAttr() which also checks for a string unless the object
> supports tp_getattro -- but that's only true for classes and
> instances.
> 
> Also, should we convert the string to 8-bit, or should we allow
> Unicode attribute names?

Attribute names will have to be 8-bit strings (at least in 2.0).

The reason here is that attributes are normally Python identifiers
which are plain ASCII and stored as 8-bit strings in the namespace
dictionaries, i.e. there's no way to add Unicode attribute names
other than by assigning directly to __dict__.

Note that keyword lookups already automatically convert Unicode
lookup strings to 8-bit using the default encoding. The same should
happen here, IMHO.
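As code, the proposed coercion would behave roughly like this hypothetical wrapper (shown with a bytes name standing in for the 8-bit/Unicode split of the day):

```python
import sys

def hasattr_coerced(obj, name):
    """Hypothetical wrapper illustrating the proposal: a non-string
    attribute name is coerced with the default (ASCII) encoding
    before the lookup, and an uncoercible name raises an exception
    instead of being rejected outright with a TypeError."""
    if isinstance(name, bytes):
        name = name.decode('ascii')   # the era's "default encoding"
    return hasattr(obj, name)
```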
 
> It seems there's no easy fix -- better address this after 2.0 is
> released.

Why wait for 2.1 ?

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From Vladimir.Marangozov at inrialpes.fr  Fri Sep  8 14:24:49 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Fri, 8 Sep 2000 14:24:49 +0200 (CEST)
Subject: [Python-Dev] Re: DEBUG_SAVEALL feature for gc not in 2.0b1?
In-Reply-To: <14769.15402.630192.4454@beluga.mojam.com> from "Skip Montanaro" at Sep 02, 2000 12:43:06 PM
Message-ID: <200009081224.OAA08999@python.inrialpes.fr>

Skip Montanaro wrote:
> 
>     Vlad> Skip Montanaro wrote:
>     >> 
>     >> If I read my (patched) version of gcmodule.c correctly, with the
>     >> gc.DEBUG_SAVEALL bit set, gc.garbage *does* acquire all garbage, not
>     >> just the stuff with __del__ methods.
> 
>     Vlad> Yes. And you don't know which objects are collectable and which
>     Vlad> ones are not by this collector. That is, SAVEALL transforms the
>     Vlad> collector in a cycle detector. 
> 
> Which is precisely what I want.

All right! Since I haven't seen any votes, here's a +1. I'm willing
to handle Neil's patch at SF and let it in after some minor cleanup
that we'll discuss on the patch manager.

Any objections or other opinions on this?

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From gward at mems-exchange.org  Fri Sep  8 14:59:30 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Fri, 8 Sep 2000 08:59:30 -0400
Subject: [Python-Dev] Setup script for Tools/compiler (etc.)
Message-ID: <20000908085930.A15918@ludwig.cnri.reston.va.us>

Jeremy --

it seems to me that there ought to be a setup script in Tools/compiler;
it may not be part of the standard library, but at least it ought to
support the standard installation scheme.

So here it is:

  #!/usr/bin/env python

  from distutils.core import setup

  setup(name = "compiler",
        version = "?",
        author = "Jeremy Hylton",
        author_email = "jeremy at beopen.com",
        packages = ["compiler"])

Do you want to check it in or shall I?  ;-)

Also -- and this is the reason I cc'd python-dev -- there are probably
other useful hacks in Tools that should have setup scripts.  I'm
thinking most prominently of IDLE; as near as I can tell, the only way
to install IDLE is to manually copy Tools/idle/*.py to
<prefix>/lib/python{1.6,2.0}/site-packages/idle and then write a little
shell script to launch it for you, eg:

  #!/bin/sh
  # GPW 2000/07/10 ("strongly inspired" by Red Hat's IDLE script ;-)
  exec /depot/plat/packages/python-2.0b1/bin/python \
    /depot/plat/packages/python-2.0b1/lib/python2.0/site-packages/idle/idle.py $*

This is, of course, completely BOGUS!  Users should not have to write
shell scripts just to install and run IDLE in a sensible way.  I would
be happy to write a setup script that makes it easy to install
Tools/idle as a "third-party" module distribution, complete with a
launch script, if there's interest.  Oh hell, maybe I'll do it
anyways... just howl if you don't think I should check it in.

        Greg
-- 
Greg Ward - software developer                gward at mems-exchange.org
MEMS Exchange / CNRI                           voice: +1-703-262-5376
Reston, Virginia, USA                            fax: +1-703-262-5367



From Vladimir.Marangozov at inrialpes.fr  Fri Sep  8 15:47:08 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Fri, 8 Sep 2000 15:47:08 +0200 (CEST)
Subject: [Python-Dev] Challenge about print >> None
Message-ID: <200009081347.PAA13686@python.inrialpes.fr>

Seems like people are very surprised to see "print >> None" defaulting
to "print >> sys.stderr". I must confess that now that I'm looking at
it and after reading the PEP, this change lacks some argumentation.

In Python, this form surely looks & feels like the Unix cat /dev/null,
that is, since None doesn't have a 'write' method, the print statement
is expected to either raise an exception or be specialized for None to mean
"the print statement has no effect". The deliberate choice of sys.stderr
is not obvious.

I understand that Guido wanted to say "print >> None, args == print args"
and simplify the script logic, but using None in this case seems like a
bad spelling <wink>.

I have certainly carefully avoided any debates on the issue as I don't
see myself using this feature any time soon, but when I see reactions of
surprise on c.l.py to weakly argued/documented features, and I feel much
the same way, I'd better ask for more arguments here myself.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From gward at mems-exchange.org  Fri Sep  8 16:14:26 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Fri, 8 Sep 2000 10:14:26 -0400
Subject: [Python-Dev] Distutil-ized IDLE
In-Reply-To: <20000908085930.A15918@ludwig.cnri.reston.va.us>; from gward@mems-exchange.org on Fri, Sep 08, 2000 at 08:59:30AM -0400
References: <20000908085930.A15918@ludwig.cnri.reston.va.us>
Message-ID: <20000908101426.A16014@ludwig.cnri.reston.va.us>

On 08 September 2000, I said:
> I would be happy to write a setup script that makes it easy to install
> Tools/idle as a "third-party" module distribution, complete with a
> launch script, if there's interest.  Oh hell, maybe I'll do it
> anyways... just howl if you don't think I should check it in.

OK, as threatened, I've written a setup script for IDLE.  (Specifically,
the version in Tools/idle in the Python 1.6 and 2.0 source
distributions.)  This installs IDLE into a package "idle", which means
that the imports in idle.py have to change.  Rather than change idle.py,
I wrote a new script just called "idle"; this would replace idle.py and
be installed in <prefix>/bin (on Unix -- I think scripts installed by
the Distutils go to <prefix>/Scripts on Windows, which was a largely
arbitrary choice).

Anyways, here's the setup script:

  #!/usr/bin/env python

  import os
  from distutils.core import setup
  from distutils.command.install_data import install_data

  class IDLE_install_data (install_data):
      def finalize_options (self):
          if self.install_dir is None:
              install_lib = self.get_finalized_command('install_lib')
              self.install_dir = os.path.join(install_lib.install_dir, "idle")

  setup(name = "IDLE",
        version = "0.6",
        author = "Guido van Rossum",
        author_email = "guido at python.org",
        cmdclass = {'install_data': IDLE_install_data},
        packages = ['idle'],
        package_dir = {'idle': ''},
        scripts = ['idle'],
        data_files = ['config.txt', 'config-unix.txt', 'config-win.txt'])

And the changes I suggest to make IDLE smoothly installable:
  * remove idle.py 
  * add this setup.py and idle (which is just idle.py with the imports
    changed)
  * add some instructions on how to install and run IDLE somewhere

I just checked the CVS repository for the IDLE fork, and don't see a
setup.py there either -- so presumably the forked IDLE could benefit
from this as well (hence the cc: idle-dev at python.org).

        Greg
-- 
Greg Ward - software developer                gward at mems-exchange.org
MEMS Exchange / CNRI                           voice: +1-703-262-5376
Reston, Virginia, USA                            fax: +1-703-262-5367



From mal at lemburg.com  Fri Sep  8 16:30:37 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 08 Sep 2000 16:30:37 +0200
Subject: [Python-Dev] Challenge about print >> None
References: <200009081347.PAA13686@python.inrialpes.fr>
Message-ID: <39B8F80D.FF9CBAA9@lemburg.com>

Vladimir Marangozov wrote:
> 
> Seems like people are very surprised to see "print >> None" defaulting
> to "print >> sys.stderr". I must confess that now that I'm looking at
> it and after reading the PEP, this change lacks some argumentation.

According to the PEP it defaults to sys.stdout with the effect of
working just like the plain old "print" statement.
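Modeled as a function, the defaulting rule in the PEP amounts to something like this (a sketch of the semantics only; the feature itself is statement syntax, and the names below are made up):

```python
import sys

def print_to(stream, *args):
    # Per PEP 214, "print >> None, args" behaves exactly like a
    # plain "print args": a None stream falls back to sys.stdout.
    if stream is None:
        stream = sys.stdout
    stream.write(" ".join(map(str, args)) + "\n")
```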

> In Python, this form surely looks & feels like the Unix cat /dev/null,
> that is, since None doesn't have a 'write' method, the print statement
> is expected to either raise an exception or be specialized for None to mean
> "the print statement has no effect". The deliberate choice of sys.stderr
> is not obvious.
> 
> I understand that Guido wanted to say "print >> None, args == print args"
> and simplify the script logic, but using None in this case seems like a
> bad spelling <wink>.
> 
> I have certainly carefully avoided any debates on the issue as I don't
> see myself using this feature any time soon, but when I see reactions of
> surprise on c.l.py to weakly argued/documented features, and I feel much
> the same way, I'd better ask for more arguments here myself.

+1

I'd opt for raising an exception instead of magically using
sys.stdout just to avoid two lines of explicit defaulting to
sys.stdout (see the example in the PEP).

BTW, I noted that the PEP pages on SF are not up-to-date. The
PEP 214 doesn't have the comments which Guido added in support
of the proposal.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From fdrake at beopen.com  Fri Sep  8 16:49:59 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Fri, 8 Sep 2000 10:49:59 -0400 (EDT)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <39B8F80D.FF9CBAA9@lemburg.com>
References: <200009081347.PAA13686@python.inrialpes.fr>
	<39B8F80D.FF9CBAA9@lemburg.com>
Message-ID: <14776.64663.617863.830703@cj42289-a.reston1.va.home.com>

M.-A. Lemburg writes:
 > BTW, I noted that the PEP pages on SF are not up-to-date. The
 > PEP 214 doesn't have the comments which Guido added in support
 > of the proposal.

  I just pushed new copies up to SF using the CVS versions.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From bwarsaw at beopen.com  Fri Sep  8 17:00:46 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 8 Sep 2000 11:00:46 -0400 (EDT)
Subject: [Python-Dev] Finding landmark when prefix != exec-prefix
References: <20000907224007.A959@beelzebub>
Message-ID: <14776.65310.93934.482038@anthem.concentric.net>

Greg,

The place to look for the search algorithm is in Modules/getpath.c.
There's an extensive comment at the top of the file outlining the
algorithm.

In fact $PREFIX and $EXEC_PREFIX are used, but only as fallbacks.

-Barry



From skip at mojam.com  Fri Sep  8 17:00:38 2000
From: skip at mojam.com (Skip Montanaro)
Date: Fri, 8 Sep 2000 10:00:38 -0500 (CDT)
Subject: [Python-Dev] Re: [Bug #113811] Python 2.0 beta 1 -- urllib.urlopen() fails
In-Reply-To: <003601c0194e$916012f0$74eb0b18@C322162A>
References: <14776.4972.263490.780783@beluga.mojam.com>
	<003601c0194e$916012f0$74eb0b18@C322162A>
Message-ID: <14776.65302.599381.987636@beluga.mojam.com>

    Bob> The one I used was http://dreamcast.ign.com/review_lists/a.html,
    Bob> but probably any would do since it's pretty ordinary, and the error
    Bob> occurs before making any contact with the destination.

    Bob> By the way, I forgot to mention that I'm running under Windows 2000.

Bob,

Thanks for the input.  I asked for a URL because I thought it unlikely
something common would trigger a bug.  After all, urllib.urlopen is probably
one of the most frequently used Internet-related calls in Python.

I can't reproduce this on my Linux system:

    % ./python
    Python 2.0b1 (#6, Sep  7 2000, 21:03:08) 
    [GCC 2.95.3 19991030 (prerelease)] on linux2
    Type "copyright", "credits" or "license" for more information.
    >>> import urllib
    >>> f = urllib.urlopen("http://dreamcast.ign.com/review_lists/a.html")
    >>> data = f.read()
    >>> len(data)

Perhaps one of the folks on python-dev that run Windows of some flavor can
reproduce the problem.  Can you give me a simple session transcript like the
above that fails for you?  I will see about adding a test to the urllib
regression test.

-- 
Skip Montanaro (skip at mojam.com)
http://www.mojam.com/
http://www.musi-cal.com/



From bwarsaw at beopen.com  Fri Sep  8 17:27:24 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 8 Sep 2000 11:27:24 -0400 (EDT)
Subject: [Python-Dev] Challenge about print >> None
References: <200009081347.PAA13686@python.inrialpes.fr>
Message-ID: <14777.1372.641371.803126@anthem.concentric.net>

>>>>> "VM" == Vladimir Marangozov <Vladimir.Marangozov at inrialpes.fr> writes:

    VM> Seems like people are very surprised to see "print >> None"
    VM> defaulting to "print >> sys.stderr". I must confess that now
    VM> that I'm looking at it and after reading the PEP, this change
    VM> lacks some argumentation.

sys.stdout, not stderr.

I was pretty solidly -0 on this extension, but Guido wanted it (and
even supplied the necessary patch!).  It tastes too magical to me,
for exactly the same reasons you describe.

I hadn't thought of the None == /dev/null equivalence, but that's a
better idea, IMO.  In fact, perhaps the printing could be optimized
away when None is used (although you'd lose any side-effects there
might be).  This would actually make extended print more useful
because if you used

    print >> logfile

everywhere, you'd only need to start passing in logfile=None to
disable printing.  OTOH, it's not too hard to use

    class Devnull:
        def write(self, msg): pass

    logfile = Devnull()

We'll have to wait until after the weekend for Guido's pronouncement.

-Barry





From Vladimir.Marangozov at inrialpes.fr  Fri Sep  8 18:23:13 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Fri, 8 Sep 2000 18:23:13 +0200 (CEST)
Subject: [Python-Dev] 2.0 Optimization & speed
Message-ID: <200009081623.SAA14090@python.inrialpes.fr>

Continuing my impressions on the user's feedback to date: Donn Cave
& MAL are at least two voices I've heard about an overall slowdown
of the 2.0b1 release compared to 1.5.2. Frankly, I have no idea where
this slowdown comes from and I believe that we have only vague guesses
about the possible causes: unicode database, more opcodes in ceval, etc.

I wonder whether we are in a position to try improving Python's
performance with some `wise quickies' in a next beta. But this raises
a more fundamental question on what is our margin for manoeuvres at this
point. This in turn implies that we need some classification of the
proposed optimizations to date.

Perhaps it would be good to create a dedicated Web page for this, but
in the meantime, let's try to build a list/table of the ideas that have
been proposed so far. This would be useful anyway, and the list would be
filled as time goes.

Trying to push this initiative one step further, here's a very rough start
on the top of my head:

Category 1: Algorithmic Changes

These are the most promising, since they don't relate to pure technicalities
but imply potential improvements with some evidence.
I'd put in this category:

- the dynamic dictionary/string specialization by Fred Drake
  (this is already in). Can this be applied in other areas? If so, where?

- the Python-specific mallocs. Actually, I'm pretty sure that a lot of
  `overhead' is due to the standard mallocs which happen to be expensive
  for Python in both space and time. Python is very malloc-intensive.
  The only reason I've postponed my obmalloc patch is that I still haven't
  provided an interface which allows evaluating its impact on memory
  consumption. It gives a noticeable speedup on all machines, so
  it accounts as a good candidate w.r.t. performance.

- ??? (maybe some parts of MAL's optimizations could go here)

Category 2: Technical / Code optimizations

This category includes all (more or less) controversial proposals, like

- my latest lookdict optimizations (a typical controversial `quickie')

- opcode folding & reordering. Actually, I'm unclear on why Guido
  postponed the reordering idea; it has received positive feedback
  and all theoretical reasoning and practical experiments showed that
  this "could" help, although without any guarantees. Nobody reported
  slowdowns, though. This is typically a change without real dangers.

- kill the async / pending calls logic. (Tim, what happened with this
  proposal?)

- compact the unicodedata database, which is expected to reduce the
  mem footprint, maybe improve startup time, etc. (ongoing)

- proposal about optimizing the "file hits" on startup.

- others?

If there are potential `wise quickies', maybe it's good to refresh
them now and experiment a bit more before the final release?

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From mwh21 at cam.ac.uk  Fri Sep  8 18:39:58 2000
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: Fri, 8 Sep 2000 17:39:58 +0100 (BST)
Subject: [Python-Dev] 2.0 Optimization & speed
In-Reply-To: <200009081623.SAA14090@python.inrialpes.fr>
Message-ID: <Pine.LNX.4.10.10009081736070.29215-100000@localhost.localdomain>

It's 5:30 and I'm still at work (eek!) so for now I'll just say:

On Fri, 8 Sep 2000, Vladimir Marangozov wrote:
[...]
> Category 2: Technical / Code optimizations
[...]
> - others?

Killing off SET_LINENO?

Cheers,
M.





From mal at lemburg.com  Fri Sep  8 18:49:58 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 08 Sep 2000 18:49:58 +0200
Subject: [Python-Dev] 2.0 Optimization & speed
References: <200009081623.SAA14090@python.inrialpes.fr>
Message-ID: <39B918B6.659C6C88@lemburg.com>

Vladimir Marangozov wrote:
> 
> Continuing my impressions on the user's feedback to date: Donn Cave
> & MAL are at least two voices I've heard about an overall slowdown
> of the 2.0b1 release compared to 1.5.2. Frankly, I have no idea where
> this slowdown comes from and I believe that we have only vague guesses
> about the possible causes: unicode database, more opcodes in ceval, etc.
> 
> I wonder whether we are in a position to try improving Python's
> performance with some `wise quickies' in a next beta.

I don't think it's worth trying to optimize anything in the
beta series: optimizations need to be well tested and therefore
should go into 2.1.

Perhaps we ought to make these optimizations the big new issue
for 2.1...

It would fit well with the move to a more pluggable interpreter
design.

> But this raises
> a more fundamental question on what is our margin for manoeuvres at this
> point. This in turn implies that we need some classification of the
> proposed optimizations to date.
> 
> Perhaps it would be good to create a dedicated Web page for this, but
> in the meantime, let's try to build a list/table of the ideas that have
> been proposed so far. This would be useful anyway, and the list would be
> filled as time goes.
> 
> Trying to push this initiative one step further, here's a very rough start
> on the top of my head:
> 
> Category 1: Algorithmic Changes
> 
> These are the most promising, since they don't relate to pure technicalities
> but imply potential improvements with some evidence.
> I'd put in this category:
> 
> - the dynamic dictionary/string specialization by Fred Drake
>   (this is already in). Can this be applied in other areas? If so, where?
>
> - the Python-specific mallocs. Actually, I'm pretty sure that a lot of
>   `overhead' is due to the standard mallocs which happen to be expensive
>   for Python in both space and time. Python is very malloc-intensive.
>   The only reason I've postponed my obmalloc patch is that I still haven't
>   provided an interface which allows evaluating its impact on memory
>   consumption. It gives a noticeable speedup on all machines, so
>   it accounts as a good candidate w.r.t. performance.
> 
> - ??? (maybe some parts of MAL's optimizations could go here)

One addition would be my small dict patch: the dictionary
tables for small dictionaries are added to the dictionary
object itself rather than allocating a separate buffer.
This is useful for small dictionaries (8-16 entries) and
causes a speedup due to the fact that most instance dictionaries
are in fact of that size.
 
> Category 2: Technical / Code optimizations
> 
> This category includes all (more or less) controversial proposals, like
> 
> - my latest lookdict optimizations (a typical controversial `quickie')
> 
> - opcode folding & reordering. Actually, I'm unclear on why Guido
>   postponed the reordering idea; it has received positive feedback
>   and all theoretical reasoning and practical experiments showed that
>   this "could" help, although without any guarantees. Nobody reported
>   slowdowns, though. This is typically a change without real dangers.

Rather than folding opcodes, I'd suggest breaking the huge
switch in two or three parts so that the most commonly used
opcodes fit nicely into the CPU cache.
 
> - kill the async / pending calls logic. (Tim, what happened with this
>   proposal?)

In my patched version of 1.5 I have moved this logic into the
second part of the ceval switch: as a result, signals are only
queried if a less common opcode is used.

> - compact the unicodedata database, which is expected to reduce the
>   mem footprint, maybe improve startup time, etc. (ongoing)

This was postponed to 2.1. It doesn't have any impact on
performance... not even on memory footprint since it is only
loaded on demand by the OS.
 
> - proposal about optimizing the "file hits" on startup.

A major startup speedup can be had by using a smarter
file lookup mechanism. 

Another possibility is freeze()ing the whole standard lib 
and putting it into a shared module. I'm not sure how well
this works with packages, but it did work very well for
1.5.2 (see the mxCGIPython project).
 
> - others?
> 
> If there are potential `wise quickies', maybe it's good to refresh
> them now and experiment a bit more before the final release?

No, let's leave this for 2.1.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From cgw at fnal.gov  Fri Sep  8 19:18:01 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Fri, 8 Sep 2000 12:18:01 -0500 (CDT)
Subject: [Python-Dev] obsolete urlopen.py in CVS
Message-ID: <14777.8009.543626.966203@buffalo.fnal.gov>

Another obsolete file has magically appeared in my local CVS
workspace.  I am assuming that I should continue to report these sorts
of problems. If not, just tell me and I'll stop with these annoying
messages.  Is there a mail address for the CVS admin so I don't have
to bug the whole list?

Lib$ cvs status urlopen.py                                             
===================================================================
File: urlopen.py        Status: Up-to-date

   Working revision:    1.7
   Repository revision: 1.7     /cvsroot/python/python/dist/src/Lib/Attic/urlopen.py,v
   Sticky Tag:          (none)
   Sticky Date:         (none)
   Sticky Options:      (none)




From effbot at telia.com  Fri Sep  8 19:38:07 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Fri, 8 Sep 2000 19:38:07 +0200
Subject: [Python-Dev] 2.0 Optimization & speed
References: <200009081623.SAA14090@python.inrialpes.fr> <39B918B6.659C6C88@lemburg.com>
Message-ID: <00e401c019bb$904084a0$766940d5@hagrid>

mal wrote:
> > - compact the unicodedata database, which is expected to reduce the
> >   mem footprint, maybe improve startup time, etc. (ongoing)
> 
> This was postponed to 2.1. It doesn't have any impact on
> performance...

sure has, for anyone distributing python applications.  we're
talking more than 1 meg of extra binary bloat (over 2.5 megs
of extra source code...)

the 2.0 release PEP says:

    Compression of Unicode database - Fredrik Lundh
      SF Patch 100899
      At least for 2.0b1.  May be included in 2.0 as a bug fix.

(the API is frozen, and we have an extensive test suite...)

</F>




From fdrake at beopen.com  Fri Sep  8 19:29:54 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Fri, 8 Sep 2000 13:29:54 -0400 (EDT)
Subject: [Python-Dev] 2.0 Optimization & speed
In-Reply-To: <00e401c019bb$904084a0$766940d5@hagrid>
References: <200009081623.SAA14090@python.inrialpes.fr>
	<39B918B6.659C6C88@lemburg.com>
	<00e401c019bb$904084a0$766940d5@hagrid>
Message-ID: <14777.8722.902222.452584@cj42289-a.reston1.va.home.com>

Fredrik Lundh writes:
 > (the API is frozen, and we have an extensive test suite...)

  What are the reasons for the hold-up?


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From effbot at telia.com  Fri Sep  8 19:41:59 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Fri, 8 Sep 2000 19:41:59 +0200
Subject: [Python-Dev] obsolete urlopen.py in CVS
References: <14777.8009.543626.966203@buffalo.fnal.gov>
Message-ID: <00ea01c019bc$1929f4e0$766940d5@hagrid>

Charles G Waldman wrote:
> Another obsolete file has magically appeared in my local CVS
> workspace.  I am assuming that I should continue to report these sorts
> of problems. If not, just tell me and I'll stop with these annoying
> messages.

what exactly are you doing to check things out?

note that CVS may check things out from the Attic under
certain circumstances, like if you do "cvs update -D".  see
the CVS FAQ for more info.

</F>




From mal at lemburg.com  Fri Sep  8 19:43:40 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 08 Sep 2000 19:43:40 +0200
Subject: [Python-Dev] 2.0 Optimization & speed
References: <200009081623.SAA14090@python.inrialpes.fr> <39B918B6.659C6C88@lemburg.com> <00e401c019bb$904084a0$766940d5@hagrid>
Message-ID: <39B9254C.5209AC81@lemburg.com>

Fredrik Lundh wrote:
> 
> mal wrote:
> > > - compact the unicodedata database, which is expected to reduce the
> > >   mem footprint, maybe improve startup time, etc. (ongoing)
> >
> > This was postponed to 2.1. It doesn't have any impact on
> > performance...
> 
> sure has, for anyone distributing python applications.  we're
> talking more than 1 meg of extra binary bloat (over 2.5 megs
> of extra source code...)

Yes, but there's no impact on speed and that's what Valdimir
was referring to.
 
> the 2.0 release PEP says:
> 
>     Compression of Unicode database - Fredrik Lundh
>       SF Patch 100899
>       At least for 2.0b1.  May be included in 2.0 as a bug fix.
> 
> (the API is frozen, and we have an extensive test suite...)

Note that I want to redesign the Unicode database and ctype
access for 2.1: all databases should be accessible through
the unicodedatabase module which will be rewritten as a Python
module. 

The real data will then go into auxiliary C modules
as static C data which are managed by the Python module
and loaded on demand. This means that what now is unicodedatabase
will then move into some _unicodedb module.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From cgw at fnal.gov  Fri Sep  8 20:13:48 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Fri, 8 Sep 2000 13:13:48 -0500 (CDT)
Subject: [Python-Dev] obsolete urlopen.py in CVS
In-Reply-To: <00ea01c019bc$1929f4e0$766940d5@hagrid>
References: <14777.8009.543626.966203@buffalo.fnal.gov>
	<00ea01c019bc$1929f4e0$766940d5@hagrid>
Message-ID: <14777.11356.106477.440474@buffalo.fnal.gov>

Fredrik Lundh writes:

 > what exactly are you doing to check things out?

cvs update -dAP

 > note that CVS may check things out from the Attic under
 > certain circumstances, like if you do "cvs update -D".  see
 > the CVS FAQ for more info.

No, I am not using the '-D' flag.






From Vladimir.Marangozov at inrialpes.fr  Fri Sep  8 21:27:06 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Fri, 8 Sep 2000 21:27:06 +0200 (CEST)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <14777.1372.641371.803126@anthem.concentric.net> from "Barry A. Warsaw" at Sep 08, 2000 11:27:24 AM
Message-ID: <200009081927.VAA14502@python.inrialpes.fr>

Barry A. Warsaw wrote:
> 
> 
> >>>>> "VM" == Vladimir Marangozov <Vladimir.Marangozov at inrialpes.fr> writes:
> 
>     VM> Seems like people are very surprised to see "print >> None"
>     VM> defaulting to "print >> sys.stderr". I must confess that now
>     VM> that I'm looking at it and after reading the PEP, this change
>     VM> lacks some argumentation.
> 
> sys.stdout, not stderr.

typo

> 
> I was pretty solidly -0 on this extension, but Guido wanted it (and
> even supplied the necessary patch!).  It tastes too magical to me,
> for exactly the same reasons you describe.
> 
> I hadn't thought of the None == /dev/null equivalence, but that's a
> better idea, IMO.  In fact, perhaps the printing could be optimized
> away when None is used (although you'd lose any side-effects there
> might be).  This would actually make extended print more useful
> because if you used
> 
>     print >> logfile
> 
> everywhere, you'd only need to start passing in logfile=None to
> disable printing.  OTOH, it's not too hard to use
> 
>     class Devnull:
>         def write(self, msg): pass
> 	
> 
> logfile=Devnull()

In no way different than using a function, say output() or an instance
of a Stream class that can poke at will on file objects, instead of
extended print <0.5 wink>. This is a matter of personal taste, after all.
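For what it's worth, the null-sink class Barry quotes is a complete recipe; a runnable sketch (using the modern `print(..., file=...)` spelling in place of the 2.0 `print >>` statement, purely so it runs standalone):

```python
import io

class Devnull:
    """File-like sink: silently discards everything written to it."""
    def write(self, msg):
        pass

logfile = Devnull()
print("this line vanishes", file=logfile)   # `print >> logfile, ...` in 2.0

# Swapping a real stream back in re-enables output at every call site.
logfile = io.StringIO()
print("this line is kept", file=logfile)
print(logfile.getvalue())
```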

> 
> We'll have to wait until after the weekend for Guido's pronouncement.
> 

Sure. Note that I don't feel like I'll lose my sleep if this doesn't
change. However, it looks like the None business goes a bit too far here.
In the past, Guido used to label such things "creeping featurism", but
times change... :-)

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From bwarsaw at beopen.com  Fri Sep  8 21:36:01 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 8 Sep 2000 15:36:01 -0400 (EDT)
Subject: [Python-Dev] Challenge about print >> None
References: <14777.1372.641371.803126@anthem.concentric.net>
	<200009081927.VAA14502@python.inrialpes.fr>
Message-ID: <14777.16289.587240.778501@anthem.concentric.net>

>>>>> "VM" == Vladimir Marangozov <Vladimir.Marangozov at inrialpes.fr> writes:

    VM> Sure. Note that I don't feel like I'll lose my sleep if this
    VM> doesn't change. However, it looks like the None business goes
    VM> a bit too far here.  In the past, Guido used to label such
    VM> things "creeping featurism", but times change... :-)

Agreed.



From mal at lemburg.com  Fri Sep  8 22:26:45 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 08 Sep 2000 22:26:45 +0200
Subject: [Python-Dev] Letting ../site-packages override the standard lib ?!
References: <200009081702.LAA08275@localhost.localdomain>
		<Pine.LNX.4.21.0009081236020.16116-100000@amati.techno.com> <14777.18321.457342.757978@cj42289-a.reston1.va.home.com>
Message-ID: <39B94B85.BFD16019@lemburg.com>

As you may have heard, there are problems with the stock
XML support and the PyXML project due to both trying to
use the xml package namespace (see the xml-sig for details).

To provide more flexibility to the third-party tools in such
a situation, I think it would be worthwhile moving the
site-packages/ entry in sys.path in front of the lib/python2.0/
entry.

That way a third party tool can override the standard lib's
package or module or take appropriate action to reintegrate
the standard lib's package namespace into an extended one.
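A sketch of the proposed reordering, using a made-up path list (the real change would apply the same shuffle to sys.path at interpreter startup):

```python
# Hypothetical sys.path before the change: stdlib ahead of site-packages.
path = [
    "",                                        # script directory / cwd
    "/usr/local/lib/python2.0",                # standard library
    "/usr/local/lib/python2.0/site-packages",  # third-party packages
]

# Proposed order: keep the script directory first, then let
# site-packages entries shadow the standard library.
site = [p for p in path if "site-packages" in p]
rest = [p for p in path if "site-packages" not in p]
reordered = rest[:1] + site + rest[1:]

print(reordered)
# -> ['', '/usr/local/lib/python2.0/site-packages', '/usr/local/lib/python2.0']
```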

What do you think ?

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From Vladimir.Marangozov at inrialpes.fr  Fri Sep  8 22:48:23 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Fri, 8 Sep 2000 22:48:23 +0200 (CEST)
Subject: [Python-Dev] 2.0 Optimization & speed
In-Reply-To: <39B9254C.5209AC81@lemburg.com> from "M.-A. Lemburg" at Sep 08, 2000 07:43:40 PM
Message-ID: <200009082048.WAA14671@python.inrialpes.fr>

M.-A. Lemburg wrote:
> 
> Fredrik Lundh wrote:
> > 
> > mal wrote:
> > > > - compact the unicodedata database, which is expected to reduce the
> > > >   mem footprint, maybe improve startup time, etc. (ongoing)
> > >
> > > This was postponed to 2.1. It doesn't have any impact on
> > > performance...
> > 
> > sure has, for anyone distributing python applications.  we're
> > talking more than 1 meg of extra binary bloat (over 2.5 megs
> > of extra source code...)
> 
> Yes, but there's no impact on speed and that's what Valdimir
> was referring to.

Hey Marc-Andre, what encoding are you using for printing my name? <wink>

>  
> > the 2.0 release PEP says:
> > 
> >     Compression of Unicode database - Fredrik Lundh
> >       SF Patch 100899
> >       At least for 2.0b1.  May be included in 2.0 as a bug fix.
> > 
> > (the API is frozen, and we have an extensive test suite...)
> 
> Note that I want to redesign the Unicode database and ctype
> access for 2.1: all databases should be accessible through
> the unicodedatabase module which will be rewritten as Python
> module. 
> 
> The real data will then go into auxiliary C modules
> as static C data which are managed by the Python module
> and loaded on demand. This means that what now is unicodedatabase
> will then move into some _unicodedb module.

Hey Marc-Andre, don't try to reduce /F's crunching efforts to dust.
My argument doesn't hold, but Fredrik has a point and I don't see how
your future changes would invalidate these efforts. If the size of
the distribution can be reduced, it should be reduced! Did you know
that telecom companies measure the quality of their technologies on
a per bit basis? <0.1 wink> Every bit costs money, and that's why
Van Jacobson packet-header compression has been invented and is
massively used. Whole armies of researchers are currently trying to
compensate for the irresponsible bloatware that people of the higher
layers are imposing on them <wink>. Careful!

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From jeremy at beopen.com  Fri Sep  8 22:54:33 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Fri, 8 Sep 2000 16:54:33 -0400 (EDT)
Subject: [Python-Dev] Letting ../site-packages override the standard lib ?!
In-Reply-To: <39B94B85.BFD16019@lemburg.com>
References: <200009081702.LAA08275@localhost.localdomain>
	<Pine.LNX.4.21.0009081236020.16116-100000@amati.techno.com>
	<14777.18321.457342.757978@cj42289-a.reston1.va.home.com>
	<39B94B85.BFD16019@lemburg.com>
Message-ID: <14777.21001.363279.137646@bitdiddle.concentric.net>

>>>>> "MAL" == M -A Lemburg <mal at lemburg.com> writes:

  MAL> To provide more flexibility to the third-party tools in such a
  MAL> situation, I think it would be worthwhile moving the
  MAL> site-packages/ entry in sys.path in front of the lib/python2.0/
  MAL> entry.

  MAL> That way a third party tool can override the standard lib's
  MAL> package or module or take appropriate action to reintegrate the
  MAL> standard lib's package namespace into an extended one.

  MAL> What do you think ?

I think it is a bad idea to encourage third party tools to override
the standard library.  We call it the standard library for a reason!

It invites confusion and headaches to read a bit of code that says
"import pickle" and have its meaning depend on what oddball packages
someone has installed on the system.  Good bye, portability!

If you want to use a third-party package that provides the same
interface as a standard library, it seems much cleaner to say so
explicitly.

I would agree that there is an interesting design problem here.  I
think the problem is supporting interfaces, where an interface allows me
to write code that can run with any implementation of that interface.
I don't think hacking sys.path is a good solution.

Jeremy



From akuchlin at mems-exchange.org  Fri Sep  8 22:52:02 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Fri, 8 Sep 2000 16:52:02 -0400
Subject: [Python-Dev] Letting ../site-packages override the standard lib ?!
In-Reply-To: <14777.21001.363279.137646@bitdiddle.concentric.net>; from jeremy@beopen.com on Fri, Sep 08, 2000 at 04:54:33PM -0400
References: <200009081702.LAA08275@localhost.localdomain> <Pine.LNX.4.21.0009081236020.16116-100000@amati.techno.com> <14777.18321.457342.757978@cj42289-a.reston1.va.home.com> <39B94B85.BFD16019@lemburg.com> <14777.21001.363279.137646@bitdiddle.concentric.net>
Message-ID: <20000908165202.F12994@kronos.cnri.reston.va.us>

On Fri, Sep 08, 2000 at 04:54:33PM -0400, Jeremy Hylton wrote:
>It invites confusion and headaches to read a bit of code that says
>"import pickle" and have its meaning depend on what oddball packages
>someone has installed on the system.  Good bye, portability!

Amen.  But then, I was against adding xml/ in the first place...

--amk



From mal at lemburg.com  Fri Sep  8 22:53:32 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 08 Sep 2000 22:53:32 +0200
Subject: [Python-Dev] 2.0 Optimization & speed
References: <200009082048.WAA14671@python.inrialpes.fr>
Message-ID: <39B951CC.3C0AE801@lemburg.com>

Vladimir Marangozov wrote:
> 
> M.-A. Lemburg wrote:
> >
> > Fredrik Lundh wrote:
> > >
> > > mal wrote:
> > > > > - compact the unicodedata database, which is expected to reduce the
> > > > >   mem footprint, maybe improve startup time, etc. (ongoing)
> > > >
> > > > This was postponed to 2.1. It doesn't have any impact on
> > > > performance...
> > >
> > > sure has, for anyone distributing python applications.  we're
> > > talking more than 1 meg of extra binary bloat (over 2.5 megs
> > > of extra source code...)
> >
> > Yes, but there's no impact on speed and that's what Valdimir
> > was referring to.
> 
> Hey Marc-Andre, what encoding are you using for printing my name? <wink>

Yeah, I know... the codec swaps characters on an irregular basis
-- gotta fix that ;-)
 
> >
> > > the 2.0 release PEP says:
> > >
> > >     Compression of Unicode database - Fredrik Lundh
> > >       SF Patch 100899
> > >       At least for 2.0b1.  May be included in 2.0 as a bug fix.
> > >
> > > (the API is frozen, and we have an extensive test suite...)
> >
> > Note that I want to redesign the Unicode database and ctype
> > access for 2.1: all databases should be accessible through
> > the unicodedatabase module which will be rewritten as Python
> > module.
> >
> > The real data will then go into auxiliary C modules
> > as static C data which are managed by the Python module
> > and loaded on demand. This means that what now is unicodedatabase
> > will then move into some _unicodedb module.
> 
> Hey Marc-Andre, don't try to reduce /F's crunching efforts to dust.

Oh, I didn't try to reduce Fredrik's efforts at all. To the
contrary: I'm still looking forward to his melted down version
of the database and the ctype tables.

The point I wanted to make was that all this can well be
done for 2.1. There are many more urgent things which need
to get settled in the beta cycle. Size optimizations are
not necessarily one of them, IMHO.

> My argument doesn't hold, but Fredrik has a point and I don't see how
> your future changes would invalidate these efforts. If the size of
> the distribution can be reduced, it should be reduced! Did you know
> that telecom companies measure the quality of their technologies on
> a per bit basis? <0.1 wink> Every bit costs money, and that's why
> Van Jacobson packet-header compression has been invented and is
> massively used. Whole armies of researchers are currently trying to
> compensate for the irresponsible bloatware that people of the higher
> layers are imposing on them <wink>. Careful!

True, but why the hurry ?

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From tim_one at email.msn.com  Fri Sep  8 22:58:31 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 8 Sep 2000 16:58:31 -0400
Subject: [Python-Dev] Letting ../site-packages override the standard lib ?!
In-Reply-To: <20000908165202.F12994@kronos.cnri.reston.va.us>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEACHFAA.tim_one@email.msn.com>

[Andrew Kuchling]
> Amen.  But then, I was against adding xml/ in the first place...

So *you're* the guy who sabotaged the Windows installer!  Should have
guessed -- you almost got away with it, too <wink>.





From mal at lemburg.com  Fri Sep  8 23:31:06 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 08 Sep 2000 23:31:06 +0200
Subject: [Python-Dev] Letting ../site-packages override the standard lib ?!
References: <200009081702.LAA08275@localhost.localdomain>
		<Pine.LNX.4.21.0009081236020.16116-100000@amati.techno.com>
		<14777.18321.457342.757978@cj42289-a.reston1.va.home.com>
		<39B94B85.BFD16019@lemburg.com> <14777.21001.363279.137646@bitdiddle.concentric.net>
Message-ID: <39B95A9A.D5A01F53@lemburg.com>

Jeremy Hylton wrote:
> 
> >>>>> "MAL" == M -A Lemburg <mal at lemburg.com> writes:
> 
>   MAL> To provide more flexibility to the third-party tools in such a
>   MAL> situation, I think it would be worthwhile moving the
>   MAL> site-packages/ entry in sys.path in front of the lib/python2.0/
>   MAL> entry.
> 
>   MAL> That way a third party tool can override the standard lib's
>   MAL> package or module or take appropriate action to reintegrate the
>   MAL> standard lib's package namespace into an extended one.
> 
>   MAL> What do you think ?
> 
> I think it is a bad idea to encourage third party tools to override
> the standard library.  We call it the standard library for a reason!
> 
> It invites confusion and headaches to read a bit of code that says
> "import pickle" and have its meaning depend on what oddball packages
> someone has installed on the system.  Good bye, portability!

Ok... so we'll need a more flexible solution.
 
> If you want to use a third-party package that provides the same
> interface as a standard library, it seems much cleaner to say so
> explicitly.
> 
> I would agree that there is an interesting design problem here.  I
> think the problem is supporting interfaces, where an interface allows me
> to write code that can run with any implementation of that interface.
> I don't think hacking sys.path is a good solution.

No, the problem is different: there is currently no way to
automatically add subpackages to an existing package which is
not aware of these new subpackages, i.e. say you have a
package xml in the standard lib and somebody wants to install
a new subpackage wml.

The only way to do this is by putting it into the xml
package directory (bad!) or by telling the user to
run 

	import xml_wml

first which then does the

	import xml, wml
	xml.wml = wml

to complete the installation... there has to be a more elegant
way.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Fri Sep  8 23:48:18 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 08 Sep 2000 23:48:18 +0200
Subject: [Python-Dev] PyObject_SetAttr/GetAttr() and non-string attribute names
Message-ID: <39B95EA2.7D98AA4C@lemburg.com>

While hacking along on a patch to let set|get|hasattr() accept
Unicode attribute names, I found that all current tp_getattro
and tp_setattro implementations (classes, instances, methods) expect
to find string objects as argument and don't even check for this.

Is this documented somewhere ? Should we make the existing
implementations aware of other objects as well ? Should we
fix the de-facto definition to string attribute names ?

My current solution does the latter. It's available as patch
on SF.
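For the record, the de-facto rule described here is the one that stuck: attribute names are strings, and the generic getattr/setattr machinery checks that up front before the type's slot ever sees the name. A small sketch against a modern interpreter:

```python
# String-only attribute names, as enforced by the generic attribute API.
class C:
    pass

obj = C()
setattr(obj, "x", 1)              # string attribute names are fine
assert getattr(obj, "x") == 1

rejected = False
try:
    getattr(obj, 42)              # non-string name is rejected up front
except TypeError:
    rejected = True
assert rejected
```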

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From jack at oratrix.nl  Sat Sep  9 00:55:01 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Sat, 09 Sep 2000 00:55:01 +0200
Subject: [Python-Dev] Need some hands to debug MacPython installer
Message-ID: <20000908225506.92145D71FF@oratrix.oratrix.nl>

Folks,
I need some people to test the MacPython 2.0b1 installer. It is almost 
complete, only things like the readme file and some of the
documentation (on building and such) remains to be done. At least: as
far as I know. If someone (or someones) could try
ftp://ftp.cwi.nl/pub/jack/python/mac/PythonMac20preb1Installer.bin 
and tell me whether it works that would be much appreciated.
One thing to note is that if you've been building 2.0b1 MacPythons
from the CVS repository you'll have to remove your preference file
first (no such problem with older prefs files).

All feedback is welcome, of course, but I'm especially interested in
hearing which things I've forgotten (if people could check that
expected new modules and such are indeed there), and which bits of the 
documentation (in Mac:Demo) need massaging. Oh, and bugs of course,
in the unlikely event of there being any :-)
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | ++++ see http://www.xs4all.nl/~tank/ ++++



From gstein at lyra.org  Sat Sep  9 01:08:55 2000
From: gstein at lyra.org (Greg Stein)
Date: Fri, 8 Sep 2000 16:08:55 -0700
Subject: [Python-Dev] Letting ../site-packages override the standard lib ?!
In-Reply-To: <39B95A9A.D5A01F53@lemburg.com>; from mal@lemburg.com on Fri, Sep 08, 2000 at 11:31:06PM +0200
References: <200009081702.LAA08275@localhost.localdomain> <Pine.LNX.4.21.0009081236020.16116-100000@amati.techno.com> <14777.18321.457342.757978@cj42289-a.reston1.va.home.com> <39B94B85.BFD16019@lemburg.com> <14777.21001.363279.137646@bitdiddle.concentric.net> <39B95A9A.D5A01F53@lemburg.com>
Message-ID: <20000908160855.B16566@lyra.org>

On Fri, Sep 08, 2000 at 11:31:06PM +0200, M.-A. Lemburg wrote:
> Jeremy Hylton wrote:
>...
> > If you want to use a third-party package that provides the same
> > interface as a standard library, it seems much cleaner to say so
> > explicitly.
> > 
> > I would agree that there is an interesting design problem here.  I
> > think the problem is supporting interfaces, where an interface allows me
> > to write code that can run with any implementation of that interface.
> > I don't think hacking sys.path is a good solution.
> 
> No, the problem is different: there is currently no way to
> automatically add subpackages to an existing package which is
> not aware of these new subpackages, i.e. say you have a
> package xml in the standard lib and somebody wants to install
> a new subpackage wml.
> 
> The only way to do this is by putting it into the xml
> package directory (bad!) or by telling the user to
> run 
> 
> 	import xml_wml
> 
> first which then does the
> 
> 	import xml, wml
> 	xml.wml = wml
> 
> to complete the installation... there has to be a more elegant
> way.

There is. I proposed it a while back. Fred chose to use a different
mechanism, despite my recommendations to the contrary. *shrug*

The "current" mechanism require the PyXML package to completely override the
entire xml package in the Python distribution. This has certain, um,
problems... :-)

Another approach would be to use the __path__ symbol. I dislike that for
various import design reasons, but it would solve one of the issues Fred had
with my recommendation (e.g. needing to pre-import subpackages).
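The `__path__` approach can be shown end to end. This sketch fabricates two install locations in a temporary directory (all names here are made up) and grafts a subpackage onto a package that was installed without it:

```python
import os
import sys
import tempfile

# Two separate install locations that both want to own pieces of "pkg".
root = tempfile.mkdtemp()
core = os.path.join(root, "site_a", "pkg")            # the "stdlib" copy
addon = os.path.join(root, "site_b", "pkg", "extra")  # third-party add-on
os.makedirs(core)
os.makedirs(addon)

with open(os.path.join(core, "__init__.py"), "w") as f:
    f.write("")
with open(os.path.join(addon, "__init__.py"), "w") as f:
    f.write("VALUE = 42\n")

sys.path.insert(0, os.path.join(root, "site_a"))
import pkg

# Extending the package's __path__ makes the second location searchable
# for subpackages -- the idea behind what later became pkgutil.extend_path.
pkg.__path__.append(os.path.join(root, "site_b", "pkg"))

from pkg import extra
print(extra.VALUE)  # -> 42
```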

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/



From cgw at fnal.gov  Sat Sep  9 01:41:12 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Fri, 8 Sep 2000 18:41:12 -0500 (CDT)
Subject: [Python-Dev] Need some hands to debug MacPython installer
In-Reply-To: <20000908225506.92145D71FF@oratrix.oratrix.nl>
References: <20000908225506.92145D71FF@oratrix.oratrix.nl>
Message-ID: <14777.31000.382351.905418@buffalo.fnal.gov>

Jack Jansen writes:
 > Folks,
 > I need some people to test the MacPython 2.0b1 installer. 

I am not a Mac user but I saw your posting and my wife has a Mac so I
decided to give it a try. 

When I ran the installer, a lot of the text referred to "Python 1.6"
despite this being a 2.0 installer.

As the install completed I got a message:  

 The application "Configure Python" could not be opened because
 "OTInetClientLib -- OTInetGetSecondaryAddresses" could not be found

After that, if I try to bring up PythonIDE or PythonInterpreter by
clicking on the 16-ton icons, I get the same message about
OTInetGetSecondaryAddresses.  So I'm not able to run Python at all
right now on this Mac.



From sdm7g at virginia.edu  Sat Sep  9 02:23:45 2000
From: sdm7g at virginia.edu (Steven D. Majewski)
Date: Fri, 8 Sep 2000 20:23:45 -0400 (EDT)
Subject: [Python-Dev] Re: [Pythonmac-SIG] Need some hands to debug MacPython installer
In-Reply-To: <20000908225506.92145D71FF@oratrix.oratrix.nl>
Message-ID: <Pine.A32.3.90.1000908201956.15033A-100000@elvis.med.Virginia.EDU>

On Sat, 9 Sep 2000, Jack Jansen wrote:

> All feedback is welcome, of course, but I'm especially interested in
> hearing which things I've forgotten (if people could check that
> expected new modules and such are indeed there), and which bits of the 
> documentation (in Mac:Demo) need massaging. Oh, and bugs of course,
> in the unlikely event of there being any :-)

Install went smoothly. I haven't been following the latest developments,
so I'm not sure if this is SUPPOSED to work yet or not, but: 


Python 2.0b1 (#64, Sep  8 2000, 23:37:06)  [CW PPC w/GUSI2 w/THREADS]
Copyright (c) 2000 BeOpen.com.
All Rights Reserved.

 [...] 

>>> import thread
>>> import threading
Traceback (most recent call last):
  File "<input>", line 1, in ?
  File "Work:Python 2.0preb1:Lib:threading.py", line 538, in ?
    _MainThread()
  File "Work:Python 2.0preb1:Lib:threading.py", line 465, in __init__
    import atexit
ImportError: No module named atexit


(I'll try exercising some old scripts and see what else happens.)

---|  Steven D. Majewski   (804-982-0831)  <sdm7g at Virginia.EDU>  |---
---|  Department of Molecular Physiology and Biological Physics  |---
---|  University of Virginia             Health Sciences Center  |---
---|  P.O. Box 10011            Charlottesville, VA  22906-0011  |---
		"All operating systems want to be unix, 
		 All programming languages want to be lisp." 




From barry at scottb.demon.co.uk  Sat Sep  9 12:40:04 2000
From: barry at scottb.demon.co.uk (Barry Scott)
Date: Sat, 9 Sep 2000 11:40:04 +0100
Subject: [Python-Dev] xml missing in Windows installer?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOENFHEAA.tim_one@email.msn.com>
Message-ID: <000001c01a4a$5066f280$060210ac@private>

I understand what you did and why. What I think is wrong is to use the
same name for the filename of the windows installer, source tar etc.

Each kit has a unique version but you have not reflected it in the
filenames. Only the filename is visible in a browser.

Why can't you add the 3 vs. 4 mark to the file name?

I cannot see the time stamp from a browser without downloading the file.

Won't you be getting bug reports against 2.0b1 and not know which one
the user has, unless they realise the #n is important and tell you?

You don't have any quick way to check that the webmaster on CNRI has changed
the file to your newer version without downloading it.

I'm sure there are other tasks that users and developers will find harder.

	BArry




From tim_one at email.msn.com  Sat Sep  9 13:18:21 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 9 Sep 2000 07:18:21 -0400
Subject: [Python-Dev] xml missing in Windows installer?
In-Reply-To: <000001c01a4a$5066f280$060210ac@private>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEBJHFAA.tim_one@email.msn.com>

Sorry, but I can't do anything more about this now.  The notice was supposed
to go up on the website at the same instant as the new installer, but the
people who can actually put the notice up *still* haven't done it.

In the future I'll certainly change the filename, should this ever happen
again (and, no, I can't change the filename from here either).

In the meantime, you don't want to hear this, but you're certainly free to
change the filenames on your end <wink -- but nobody yet has reported an
actual real-life confusion related to this, so while it may suck in theory,
practice appears much more forgiving>.

BTW, I didn't understand the complaint about "same name for the filename of
the windows installer, source tar etc.".  The *only* file I had replaced was

    BeOpen-Python-2.0b1.exe

I guess Fred replaced the PDF-format doc downloads too?  IIRC, those were
totally broken.  Don't think anything else was changed.

About bug reports, the only report of any possible relevance will be "I
tried to load the xml package under Windows 2.0b1, but got an
ImportError" -- and the cause of that will be obvious.  Also remember that
this is a beta release:  by definition, anyone using it at all a few weeks
from now is entirely on their own.

> -----Original Message-----
> From: Barry Scott [mailto:barry at scottb.demon.co.uk]
> Sent: Saturday, September 09, 2000 6:40 AM
> To: Tim Peters; python-dev at python.org
> Subject: RE: [Python-Dev] xml missing in Windows installer?
>
>
> I understand what you did and why. What I think is wrong is to use the
> same name for the filename of the windows installer, source tar etc.
>
> Each kit has a unique version but you have not reflected it in the
> filenames. Only the filename is visible in a browser.
>
> Why can't you add the 3 vs. 4 mark to the file name?
>
> I cannot see the time stamp from a browser without downloading the file.
>
> Won't you be getting bug reports against 2.0b1 and not know which one
> the user has, unless they realise the #n is important and tell you?
>
> You don't have any quick way to check that the webmaster on CNRI
> has changed
> the file to your newer version without downloading it.
>
> I'm sure there are other tasks that users and developers will find
> harder.
>
> 	BArry





From MarkH at ActiveState.com  Sat Sep  9 17:36:54 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Sun, 10 Sep 2000 02:36:54 +1100
Subject: [Python-Dev] xml missing in Windows installer?
In-Reply-To: <000001c01a4a$5066f280$060210ac@private>
Message-ID: <ECEPKNMJLHAPFFJHDOJBIEJHDIAA.MarkH@ActiveState.com>

> I understand what you did and why. What I think is wrong is to use the
> same name for the filename of the windows installer, source tar etc.

Seeing as everyone (both of you <wink>) is hassling Tim, let me also stick
up for the actions.  This is a beta release, and as Tim said, is not any
sort of fix, other than what is installed.  The symptoms are obvious.
Sheesh - most people will hardly be aware xml support is _supposed_ to be
there :-)

I can see the other POV, but I don't think this is worth the administrative
overhead of a newly branded release.

Feeling-chatty, ly.

Mark.




From jack at oratrix.nl  Sun Sep 10 00:53:50 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Sun, 10 Sep 2000 00:53:50 +0200
Subject: [Python-Dev] Re: [Pythonmac-SIG] Need some hands to debug MacPython installer
In-Reply-To: Message by "Steven D. Majewski" <sdm7g@virginia.edu> ,
	     Fri, 8 Sep 2000 20:23:45 -0400 (EDT) , <Pine.A32.3.90.1000908201956.15033A-100000@elvis.med.Virginia.EDU> 
Message-ID: <20000909225355.381DDD71FF@oratrix.oratrix.nl>

Oops, indeed some of the new modules were inadvertently excluded. I'll 
create a new installer tomorrow (which should also contain the
documentation and such).
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 



From barry at scottb.demon.co.uk  Sun Sep 10 23:38:34 2000
From: barry at scottb.demon.co.uk (Barry Scott)
Date: Sun, 10 Sep 2000 22:38:34 +0100
Subject: [Python-Dev] How do you want bugs reported against 2.0 beta?
Message-ID: <000201c01b6f$78594510$060210ac@private>

I just checked the announcement on www.pythonlabs.com; it's not mentioned there.

		Barry




From barry at scottb.demon.co.uk  Sun Sep 10 23:35:33 2000
From: barry at scottb.demon.co.uk (Barry Scott)
Date: Sun, 10 Sep 2000 22:35:33 +0100
Subject: [Python-Dev] xml missing in Windows installer?
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBIEJHDIAA.MarkH@ActiveState.com>
Message-ID: <000101c01b6f$0cc94250$060210ac@private>

I guess you had not seen Tim's reply. I read his reply as understanding
the problem and saying that things will be done better for future kits.

I'm glad that you will have unique names for each of the beta releases.
This will allow beta testers to report accurately which beta kit they
saw a problem in. That in turn will make handling bug reports from the
beta simpler for the maintainers.

	Barry




From tim_one at email.msn.com  Mon Sep 11 00:21:41 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 10 Sep 2000 18:21:41 -0400
Subject: [Python-Dev] xml missing in Windows installer?
In-Reply-To: <000101c01b6f$0cc94250$060210ac@private>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEEHHFAA.tim_one@email.msn.com>

[Barry Scott, presumably to Mark Hammond]
> I guess you had not seen Tim's reply.

Na, I think he did.  I bet he just thought you were being unbearably anal
about a non-problem in practice and wanted to annoy you back <wink>.

> I read his reply as understanding the problem and saying that things
> will be done better for future kits.

Oh yes.  We tried to take a shortcut, and it backfired.  I won't let that
happen again, and you were right to point it out (once <wink>).  BTW, the
notice *is* on the web site now, but depending on which browser you're
using, it may appear in a font so small it can't even be read!  The worst
part of moving to BeOpen.com so far was getting hooked up with professional
web designers who think HTML *should* be used for more than just giant
monolithic plain-text dumps <0.9 wink>; we can't change their elaborate
pages without extreme pain.

but-like-they-say-it's-the-sizzle-not-the-steak-ly y'rs  - tim





From tim_one at email.msn.com  Mon Sep 11 00:22:06 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 10 Sep 2000 18:22:06 -0400
Subject: [Python-Dev] How do you want bugs reported against 2.0 beta?
In-Reply-To: <000201c01b6f$78594510$060210ac@private>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEEIHFAA.tim_one@email.msn.com>

> I just checked the announcement on www.pythonlabs.com; it's
> not mentioned there.

All bugs get reported on SourceForge.





From gward at mems-exchange.org  Mon Sep 11 15:53:53 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Mon, 11 Sep 2000 09:53:53 -0400
Subject: [Python-Dev] Letting ../site-packages override the standard lib ?!
In-Reply-To: <39B94B85.BFD16019@lemburg.com>; from mal@lemburg.com on Fri, Sep 08, 2000 at 10:26:45PM +0200
References: <200009081702.LAA08275@localhost.localdomain> <Pine.LNX.4.21.0009081236020.16116-100000@amati.techno.com> <14777.18321.457342.757978@cj42289-a.reston1.va.home.com> <39B94B85.BFD16019@lemburg.com>
Message-ID: <20000911095352.A24415@ludwig.cnri.reston.va.us>

On 08 September 2000, M.-A. Lemburg said:
> To provide more flexibility to the third-party tools in such
> a situation, I think it would be worthwhile moving the
> site-packages/ entry in sys.path in front of the lib/python2.0/
> entry.
> 
> That way a third party tool can override the standard lib's
> package or module or take appropriate action to reintegrate
> the standard lib's package namespace into an extended one.

+0 -- I actually *like* the ability to upgrade/override bits of the
standard library; this is occasionally essential, particularly when
there are modules (or even namespaces) in the standard library that have
lives (release cycles) of their own independent of Python and its
library.

There's already a note in the Distutils README.txt about how to upgrade
the Distutils under Python 1.6/2.0; it boils down to, "rename
lib/python2.0/distutils and then install the new version".  Are PyXML,
asyncore, cPickle, etc. going to need similar qualifications in their
READMEs?  Are RPMs (and other smart installers) of these modules going to
have to include code to do the renaming for you?

Ugh.  It's a proven fact that 73% of users don't read README files[1],
and I have a strong suspicion that the reliability of an RPM (or
whatever) decreases in proportion to the amount of
pre/post-install/uninstall code that it carries around with it.  I think
reordering sys.path would allow people to painlessly upgrade bits of the
standard library, and the benefits of this outweigh the "but then it's
not standard anymore!" objection.

        Greg

[1] And 65% of statistics are completely made up!
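[Editorial sketch: the shadowing Greg describes falls out of how imports scan
sys.path in order -- the first directory containing the module wins, which is
why moving site-packages ahead of the standard library lets it override
modules. A minimal, hedged illustration (the helper and names are mine, not
part of the proposal):]

```python
import os
import sys

def first_match(module_filename, path_entries):
    """Return the first directory in path_entries that contains
    module_filename -- earlier entries shadow later ones, just as
    with sys.path during import."""
    for entry in path_entries:
        if os.path.exists(os.path.join(entry, module_filename)):
            return entry
    return None

# Whichever copy of a module lives in the earliest sys.path entry is
# the one that "import" will find.
print(sys.path[0])
```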



From cgw at fnal.gov  Mon Sep 11 20:55:09 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Mon, 11 Sep 2000 13:55:09 -0500 (CDT)
Subject: [Python-Dev] find_recursionlimit.py vs. libpthread vs. linux
Message-ID: <14781.10893.273438.446648@buffalo.fnal.gov>

It has been noted by people doing testing on Linux systems that

ulimit -s unlimited
python Misc/find_recursionlimit.py

will run for a *long* time if you have built Python without threads, but
will die after about 2400/2500 iterations if you have built with
threads, regardless of the "ulimit" setting.

I had thought this was evidence of a bug in Pthreads.  In fact
(although we still have other reasons to suspect Pthread bugs),
the behavior is easily explained.  The function "pthread_initialize"
in pthread.c contains this very lovely code:

  /* Play with the stack size limit to make sure that no stack ever grows
     beyond STACK_SIZE minus two pages (one page for the thread descriptor
     immediately beyond, and one page to act as a guard page). */
  getrlimit(RLIMIT_STACK, &limit);
  max_stack = STACK_SIZE - 2 * __getpagesize();
  if (limit.rlim_cur > max_stack) {
    limit.rlim_cur = max_stack;
    setrlimit(RLIMIT_STACK, &limit);
  }

In "internals.h", STACK_SIZE is #defined to (2 * 1024 * 1024)

So whenever you're using threads, you have an effective rlimit of 2MB
for stack, regardless of what you may *think* you have set via
"ulimit -s".

One more mystery explained!
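[Editorial sketch: the clamped limit can be observed from Python itself via
the `resource` module, as a hedged illustration. Under the old LinuxThreads
library described above, the soft limit would read back as roughly 2 MB once
threading initialized; modern NPTL-based systems no longer clamp it, so the
printed value depends on the platform:]

```python
import resource

# Query the stack-size rlimit as the process sees it.  Under old
# LinuxThreads, pthread_initialize() clamps the soft limit to
# STACK_SIZE minus two pages (~2 MB), regardless of "ulimit -s".
soft, hard = resource.getrlimit(resource.RLIMIT_STACK)
print("soft stack limit:", soft, "hard limit:", hard)
```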






From gward at mems-exchange.org  Mon Sep 11 23:13:00 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Mon, 11 Sep 2000 17:13:00 -0400
Subject: [Python-Dev] Off-topic: common employee IP agreements?
Message-ID: <20000911171259.A26210@ludwig.cnri.reston.va.us>

Hi all --

sorry for the off-topic post.  I'd like to get a calibration reading
from other members of the open source community on an issue that's
causing some controversy around here: what sort of employee IP
agreements do other software/open source/Python/Linux/Internet-related
companies require their employees to sign?

I'm especially curious about companies that are prominent in the open
source world, like Red Hat, ActiveState, VA Linux, or SuSE; and big
companies that are involved in open source, like IBM or HP.  I'm also
interested in what universities, both around the world and in the U.S.,
impose on faculty, students, and staff.  If you have knowledge -- or
direct experience -- with any sort of employee IP agreement, though, I'm
curious to hear about it.  If possible, I'd like to get my hands on the
exact document your employer uses -- precedent is everything!  ;-)

Thanks -- and please reply to me directly; no need to pollute python-dev
with more off-topic posts.

        Greg
-- 
Greg Ward - software developer                gward at mems-exchange.org
MEMS Exchange / CNRI                           voice: +1-703-262-5376
Reston, Virginia, USA                            fax: +1-703-262-5367



From guido at beopen.com  Tue Sep 12 01:10:31 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 11 Sep 2000 18:10:31 -0500
Subject: [Python-Dev] obsolete urlopen.py in CVS
In-Reply-To: Your message of "Fri, 08 Sep 2000 13:13:48 EST."
             <14777.11356.106477.440474@buffalo.fnal.gov> 
References: <14777.8009.543626.966203@buffalo.fnal.gov> <00ea01c019bc$1929f4e0$766940d5@hagrid>  
            <14777.11356.106477.440474@buffalo.fnal.gov> 
Message-ID: <200009112310.SAA08374@cj20424-a.reston1.va.home.com>

> Fredrik Lundh writes:
> 
>  > what exactly are you doing to check things out?

[Charles]
> cvs update -dAP
> 
>  > note that CVS may check things out from the Attic under
>  > certain circumstances, like if you do "cvs update -D".  see
>  > the CVS FAQ for more info.
> 
> No, I am not using the '-D' flag.

I would drop the -A flag -- what's it used for?

I've done the same dance for urlopen.py and it seems to have
disappeared now.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Tue Sep 12 01:14:38 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 11 Sep 2000 18:14:38 -0500
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: Your message of "Fri, 08 Sep 2000 15:47:08 +0200."
             <200009081347.PAA13686@python.inrialpes.fr> 
References: <200009081347.PAA13686@python.inrialpes.fr> 
Message-ID: <200009112314.SAA08409@cj20424-a.reston1.va.home.com>

[Vladimir]
> Seems like people are very surprised to see "print >> None" defaulting
> to "print >> sys.stderr". I must confess that now that I'm looking at
> it and after reading the PEP, this change lacks some argumentation.
> 
> In Python, this form surely looks & feels like the Unix cat /dev/null,
> that is, since None doesn't have a 'write' method, the print statement
> is expected to either raise an exception or be specialized for None to mean
> "the print statement has no effect". The deliberate choice of sys.stderr
> is not obvious.
> 
> I understand that Guido wanted to say "print >> None, args == print args"
> and simplify the script logic, but using None in this case seems like a
> bad spelling <wink>.
> 
> I have certainly carefully avoided any debates on the issue as I don't
> see myself using this feature any time soon, but when I see on c.l.py
> reactions of surprise on weakly argumented/documented features and I
> kind of feel the same way, I'd better ask for more arguments here myself.

(I read the followup and forgive you sys.stderr; didn't want to follow
up to the rest of the thread because it doesn't add much.)

After reading the little bit of discussion here, I still think
defaulting None to sys.stdout is a good idea.

Don't think of it as

  print >>None, args

Think of it as

  def func(file=None):
    print >>file, args

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From jeremy at beopen.com  Tue Sep 12 00:24:13 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Mon, 11 Sep 2000 18:24:13 -0400 (EDT)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <200009112314.SAA08409@cj20424-a.reston1.va.home.com>
References: <200009081347.PAA13686@python.inrialpes.fr>
	<200009112314.SAA08409@cj20424-a.reston1.va.home.com>
Message-ID: <14781.23437.165189.328323@bitdiddle.concentric.net>

>>>>> "GvR" == Guido van Rossum <guido at beopen.com> writes:

  GvR> Don't think of it as

  GvR>   print >>None, args

  GvR> Think of it as

  GvR>   def func(file=None):
  GvR>     print >>file, args

Huh?  Don't you mean think of it as:

def func(file=None):
    if file is None:
        import sys
        print >>sys.stdout, args
    else:
        print >>file, args

At least, I think that's why I find the use of None confusing.  I find
it hard to make a strong association between None and sys.stdout.  In
fact, when I was typing this message, I wrote it as sys.stderr and
only discovered my error upon re-reading the initial message.

Jeremy



From bwarsaw at beopen.com  Tue Sep 12 00:28:31 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 11 Sep 2000 18:28:31 -0400 (EDT)
Subject: [Python-Dev] Challenge about print >> None
References: <200009081347.PAA13686@python.inrialpes.fr>
	<200009112314.SAA08409@cj20424-a.reston1.va.home.com>
	<14781.23437.165189.328323@bitdiddle.concentric.net>
Message-ID: <14781.23695.934627.439238@anthem.concentric.net>

>>>>> "JH" == Jeremy Hylton <jeremy at beopen.com> writes:

    JH> At least, I think that's why I find the use of None confusing.
    JH> I find it hard to make a strong association between None and
    JH> sys.stdout.  In fact, when I was typing this message, I wrote
    JH> it as sys.stderr and only discovered my error upon re-reading
    JH> the initial message.

I think of it more like Vladimir does: "print >>None" should be
analogous to catting to /dev/null.

-Barry



From guido at beopen.com  Tue Sep 12 01:31:35 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 11 Sep 2000 18:31:35 -0500
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: Your message of "Mon, 11 Sep 2000 18:24:13 -0400."
             <14781.23437.165189.328323@bitdiddle.concentric.net> 
References: <200009081347.PAA13686@python.inrialpes.fr> <200009112314.SAA08409@cj20424-a.reston1.va.home.com>  
            <14781.23437.165189.328323@bitdiddle.concentric.net> 
Message-ID: <200009112331.SAA08558@cj20424-a.reston1.va.home.com>

> >>>>> "GvR" == Guido van Rossum <guido at beopen.com> writes:
> 
>   GvR> Don't think of it as
> 
>   GvR>   print >>None, args
> 
>   GvR> Think of it as
> 
>   GvR>   def func(file=None):
>   GvR>     print >>file, args
> 
> Huh?  Don't you mean think of it as:
> 
> def func(file=None):
>     if file is None:
>         import sys
>         print >>sys.stdout, args
>     else:
>         print >>file, args

I meant what I said.  I meant that you shouldn't think of examples
like the first one (which looks strange, just like "".join(list) does)
but examples like the second one, which (in my eyes) make for more
readable and more maintainable code.

> At least, I think that's why I find the use of None confusing.  I find
> it hard to make a strong association between None and sys.stdout.  In
> fact, when I was typing this message, I wrote it as sys.stderr and
> only discovered my error upon re-reading the initial message.

You don't have to make a strong association with sys.stdout.  When the
file expression is None, the whole ">>file, " part disappears!

Note that the writeln() function, proposed by many, would have the
same behavior:

  def writeln(*args, file=None):
      if file is None:
          file = sys.stdout
      ...write args...

I know that's not legal syntax, but that's the closest
approximation.  This is intended to let you specify file=<some file>
and have the default be sys.stdout, but passing an explicit value of
None has the same effect as leaving it out.  This idiom is used in
lots of places!
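[Editorial sketch: later Python versions made keyword-only arguments after
*args legal, so the writeln() approximation can in fact be written and run
as sketched. The formatting below (space-joined arguments, trailing newline)
is illustrative, not a specification:]

```python
import sys

def writeln(*args, file=None):
    # None (or omitting 'file' entirely) means "use whatever
    # sys.stdout is bound to at call time" -- the same rule the
    # extended print statement applies.
    if file is None:
        file = sys.stdout
    file.write(' '.join(str(a) for a in args) + '\n')
```

Passing file=None and omitting the argument behave identically, and both
honor later rebindings of sys.stdout.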

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Tue Sep 12 01:35:20 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 11 Sep 2000 18:35:20 -0500
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: Your message of "Mon, 11 Sep 2000 18:28:31 -0400."
             <14781.23695.934627.439238@anthem.concentric.net> 
References: <200009081347.PAA13686@python.inrialpes.fr> <200009112314.SAA08409@cj20424-a.reston1.va.home.com> <14781.23437.165189.328323@bitdiddle.concentric.net>  
            <14781.23695.934627.439238@anthem.concentric.net> 
Message-ID: <200009112335.SAA08609@cj20424-a.reston1.va.home.com>

>     JH> At least, I think that's why I find the use of None confusing.
>     JH> I find it hard to make a strong association between None and
>     JH> sys.stdout.  In fact, when I was typing this message, I wrote
>     JH> it as sys.stderr and only discovered my error upon re-reading
>     JH> the initial message.
> 
> I think of it more like Vladimir does: "print >>None" should be
> analogous to catting to /dev/null.

Strong -1 on that.  You can do that with any number of other
approaches.

If, as a result of a misplaced None, output appears at the wrong place
by accident, it's easy to figure out why.  If it disappears
completely, it's a much bigger mystery because you may start
suspecting lots of other places.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From Vladimir.Marangozov at inrialpes.fr  Tue Sep 12 01:22:46 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Tue, 12 Sep 2000 01:22:46 +0200 (CEST)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <200009112331.SAA08558@cj20424-a.reston1.va.home.com> from "Guido van Rossum" at Sep 11, 2000 06:31:35 PM
Message-ID: <200009112322.BAA29633@python.inrialpes.fr>

Guido van Rossum wrote:
> 
> >   GvR> Don't think of it as
> > 
> >   GvR>   print >>None, args
> > 
> >   GvR> Think of it as
> > 
> >   GvR>   def func(file=None):
> >   GvR>     print >>file, args

I understand that you want me to think this way. But that's not my
intuitive thinking. I would have written your example like this:

def func(file=sys.stdout):
    print >> file, args

This is clearer, compared to None, which is not a file.

> ...  This is intended to let you specify file=<some file>
> and have the default be sys.stdout, but passing an explicit value of
> None has the same effect as leaving it out.  This idiom is used in
> lots of places!

Exactly.
However my expectation would be to leave out the whole print statement.
I think that any specialization of None is mysterious and would be hard
to teach. From this POV, I agree with MAL that raising an exception is
the cleanest and the simplest way to do it. Any specialization of my
thought here is perceived as a burden.

However, if such specialization is desired, I'm certainly closer to
/dev/null than sys.stdout. As long as one starts redirecting output,
I believe that one already has enough knowledge about files, and in
particular about stdin, stdout and stderr. None in the sense of /dev/null
is not so far from that. It is a simple concept. But this is already
"advanced knowledge" about redirecting output on purpose.

So as long as one uses extended print, she's already an advanced user.

From tim_one at email.msn.com  Tue Sep 12 03:27:10 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Mon, 11 Sep 2000 21:27:10 -0400
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <200009112322.BAA29633@python.inrialpes.fr>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEHLHFAA.tim_one@email.msn.com>

[Vladimir Marangozov]
> ...
> As long as one starts redirecting output, I believe that one already
> has enough knowledge about files, and in particular about stdin,
> stdout and stderr. None in the sense of /dev/null is not so far from
> that.  It is a simple concept. But this is already "advanced
> knowledge" about redirecting output on purpose.

This is so Unix-centric, though; e.g., native windows users have only the
dimmest knowledge of stderr, and almost none of /dev/null.  Which ties in
to:

> So as long as one uses extended print, she's already an advanced user.

Nope!  "Now how did I get this to print to a file instead?" is one of the
faqiest of newbie FAQs on c.l.py, and the answers they've been given in the
past were sheer torture for them ("sys?  what's that?  rebind sys.stdout to
a file-like object?  what?! etc").

This is one of those cases where Guido is right, but for reasons nobody can
explain <0.8 wink>.

sometimes-you-just-gotta-trust-your-bdfl-ly y'rs  - tim





From paul at prescod.net  Tue Sep 12 07:34:10 2000
From: paul at prescod.net (Paul Prescod)
Date: Mon, 11 Sep 2000 22:34:10 -0700
Subject: [Python-Dev] Challenge about print >> None
References: <200009112322.BAA29633@python.inrialpes.fr>
Message-ID: <39BDC052.A9FEDE80@prescod.net>

Vladimir Marangozov wrote:
> 
>...
> 
> def func(file=sys.stdout):
>     print >> file, args
> 
> This is clearer, compared to None, which is not a file.

I've gotta say that I agree with you on all issues. If I saw that
file=None stuff in code in another programming language I would expect
it meant send the output nowhere. People who want sys.stdout can get it.
Special cases aren't special enough to break the rules!
-- 
 Paul Prescod - Not encumbered by corporate consensus
Simplicity does not precede complexity, but follows it. 
	- http://www.cs.yale.edu/homes/perlis-alan/quotes.html



From effbot at telia.com  Tue Sep 12 09:10:53 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 12 Sep 2000 09:10:53 +0200
Subject: [Python-Dev] Challenge about print >> None
References: <200009112322.BAA29633@python.inrialpes.fr>
Message-ID: <003001c01c88$aad09420$766940d5@hagrid>

Vladimir wrote:
> I understand that you want me to think this way. But that's not my
> intuitive thinking. I would have written your example like this:
> 
> def func(file=sys.stdout):
>     print >> file, args
> 
> This is clearer, compared to None, which is not a file.

Sigh.  Your code doesn't work.  Quoting the PEP, from the section
that discusses why passing None is the same thing as passing no
file at all:

    "Note: defaulting the file argument to sys.stdout at compile time
    is wrong, because it doesn't work right when the caller assigns to
    sys.stdout and then uses tables() without specifying the file."

I was sceptical at first, but the more I see of your counter-arguments,
the more I support Guido here.  As he pointed out, None usually means
"pretend I didn't pass this argument" in Python.  No difference here.

+1 on keeping print as it's implemented (None means default).
-1 on making None behave like a NullFile.

</F>




From Vladimir.Marangozov at inrialpes.fr  Tue Sep 12 16:11:14 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Tue, 12 Sep 2000 16:11:14 +0200 (CEST)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIEHLHFAA.tim_one@email.msn.com> from "Tim Peters" at Sep 11, 2000 09:27:10 PM
Message-ID: <200009121411.QAA30848@python.inrialpes.fr>

Tim Peters wrote:
> 
> [Vladimir Marangozov]
> > ...
> > As long as one starts redirecting output, I believe that one already
> > has enough knowledge about files, and in particular about stdin,
> > stdout and stderr. None in the sense of /dev/null is not so far from
> > that.  It is a simple concept. But this is already "advanced
> > knowledge" about redirecting output on purpose.
> 
> This is so Unix-centric, though; e.g., native windows users have only the
> dimmest knowledge of stderr, and almost none of /dev/null.

Ok, forget about /dev/null. It was just a spelling of "print to None"
which has a meaning even in spoken English.


> Which ties in to:
> 
> > So as long as one uses extended print, she's already an advanced user.
> 
> Nope!  "Now how did I get this to print to a file instead?" is one of the
> faqiest of newbie FAQs on c.l.py, and the answers they've been given in the
> past were sheer torture for them ("sys?  what's that?  rebind sys.stdout to
> a file-like object?  what?! etc").

Look, this is getting silly. You can't align the experienced users' level
of knowledge with that of newbies. What I'm trying to make clear here is
that you're not disturbing newbies, you're disturbing experienced users
and teachers who are supposed to transmit their knowledge to these newbies.

FWIW, I am one of these teachers and I have had enough classes in this
domain to trust my experience and my judgement of the students' logic
more than Guido's and your perceptions taken together about *this*
feature in particular. If you want real feedback from newbies, don't take
c.l.py as the reference -- you'd better go to the nearest school or
University and start teaching.  (how's that as a reply to your attempts
to make me think one way or another or trust abbreviations <0.1 wink>)

As long as you have embarked on the output redirection business, you
have done so explicitly, because you're supposed to understand what it
means and how it works. This is "The Next Level" in knowledge, implying
that whenever you use extended print *explicitly*, you're supposed to
provide the target of the output explicitly.

Reverting that back with None, by saying that "print >> None == print"
is illogical, because you've already engaged in this advanced concept.
Rolling back your explicit decision about dealing with redirected output
with an explicit None (yes, you must provide it explicitly to fall back
to the original behavior) is the wrong path of reasoning.  If you don't
want to redirect output, don't use extended print in the first place.
And if you want to achieve the effect of "simple" print, you should pass
sys.stdout.

I really don't see the point of explicitly passing None instead of
passing sys.stdout, once you've made your decision about redirecting
output. And in this regard, both Guido and you have not provided any
arguments that would make me think that you're probably right.
I understand very well your POV, you don't seem to understand mine.

And let me add to that the following summary: the whole extended
print idea is about convenience. Convenience for those that know
what file redirection is. Not for newbies. You can't argue too much
about extended print as an intuitive concept for newbies. The present
change disturbs experienced users (the >> syntax aside) and you get
signals about that from them, because the current behavior does not
comply with any existing concept as far as file redirection is concerned.
However, since these guys are experienced and knowledgable, they already
understand this game quite well. So what you get is just "Oh really? OK,
this is messy" from the chatty ones and everybody moves on.  The others
just don't care, but they don't necessarily agree.

I don't care either, but the fact is that I've filled two screens of text
explaining to you that you're playing with 2 different knowledge levels.
You shouldn't try to reduce the upper level to the lower one, just because
you think it is more Pythonic for newbies. You'd better take the opposite
direction and raise the newbie standard to what happens to be a very well
known concept in the area of computer programming, and in CS in general.

To provoke you a bit more, I'll tell you that I see no conceptual difference
between
             print >> None, args

and
             print >> 0, args -or- print >> [], args  -or- print >> "", args

(if you prefer, you can replace (), "", [], etc. with a var name, which can be
 assigned these values)

That is, I don't see a conceptual difference between None and any object
which evaluates to false. However, the latter are not allowed. Funny,
isn't it?  What makes None so special? <wink>

Now, the only argument I got is the one Fredrik has quoted from the PEP,
dealing with passing the default file as a parameter. I'll focus briefly
on it.

[Fredrik]

> [me]
> > def func(file=sys.stdout):
> >     print >> file, args
> > 
> > This is clearer, compared to None, which is not a file.
>
> Sigh.  Your code doesn't work.  Quoting the PEP, from the section
> that discusses why passing None is the same thing as passing no
> file at all:
> 
>     "Note: defaulting the file argument to sys.stdout at compile time
>     is wrong, because it doesn't work right when the caller assigns to
>     sys.stdout and then uses tables() without specifying the file."

Of course it doesn't work if you assign to sys.stdout. But hey,
if you assign to sys.stdout, you know what 'sys' is, what 'sys.stdout' is,
and you know basically everything about std files and output. Don't you?

Anyway, this argument is flawed, because the above is in no way
different than the issues raised when you define a default argument
which is a list, dict, tuple, etc. Compile time evaluation of default args
is a completely different discussion and extended print has (almost)
nothing to do with that. Guido has made this (strange) association between
two different subjects, which, btw, I perceive as an additional burden.

It is far better to deal with the value of the default argument within
the body of the function: this way, there are no misunderstandings.
None has all the symptoms of a hackish shortcut here.
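[Editorial sketch: the def-time binding the PEP's note warns about -- and
which Vladimir considers a separate discussion -- can be reproduced directly.
A hedged illustration in modern print-function syntax; the names are mine:]

```python
import io
import sys

def show(msg, out=sys.stdout):
    # 'out' was bound to the sys.stdout *object* when the 'def'
    # statement executed, not looked up again at call time.
    out.write(msg + '\n')

saved = sys.stdout
sys.stdout = io.StringIO()      # redirect AFTER the def ran
show('hello')                   # still goes to the original stdout object
captured = sys.stdout.getvalue()
sys.stdout = saved
# The redirection captured nothing -- the default argument had
# already frozen the old stdout in place.
print(repr(captured))           # prints ''
```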

> 
> This is one of those cases where Guido is right, but for reasons nobody can
> explain <0.8 wink>.

I'm sorry. I think that this is one of those rare cases where he is wrong.
His path of reasoning is less straightforward, and I can't adopt it. And
it seems like I'm not alone. If you ever see a columnist talking about
Python's features and extended print (mentioning print >> None as a good
thing), please let me know about it.

> 
> sometimes-you-just-gotta-trust-your-bdfl-ly y'rs  - tim
> 

I would have preferred arguments. The PEP and your responses lack them,
which is another sign about this feature.


stop-troubadouring-about-blind-BDFL-compliance-in-public'ly y'rs
-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From effbot at telia.com  Tue Sep 12 16:48:11 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 12 Sep 2000 16:48:11 +0200
Subject: [Python-Dev] Challenge about print >> None
References: <200009121411.QAA30848@python.inrialpes.fr>
Message-ID: <004801c01cc8$7ed99700$766940d5@hagrid>

> > Sigh.  Your code doesn't work.  Quoting the PEP, from the section
> > that discusses why passing None is the same thing as passing no
> > file at all:
> > 
> >     "Note: defaulting the file argument to sys.stdout at compile time
> >     is wrong, because it doesn't work right when the caller assigns to
> >     sys.stdout and then uses tables() without specifying the file."
> 
> Of course it doesn't work if you assign to sys.stdout. But hey,
> if you assign to sys.stdout, you know what 'sys' is, what 'sys.stdout' is,
> and you know basically everything about std files and output. Don't you?

no.  and since you're so much smarter than everyone else,
you should be able to figure out why.

followups to /dev/null, please.

</F>




From Vladimir.Marangozov at inrialpes.fr  Tue Sep 12 19:12:04 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Tue, 12 Sep 2000 19:12:04 +0200 (CEST)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <004801c01cc8$7ed99700$766940d5@hagrid> from "Fredrik Lundh" at Sep 12, 2000 04:48:11 PM
Message-ID: <200009121712.TAA31347@python.inrialpes.fr>

Fredrik Lundh wrote:
> 
> no.  and since you're so much smarter than everyone else,
> you should be able to figure out why.
> 
> followups to /dev/null, please.

pass


print >> pep-0214.txt, next_argument_if_not_None 'ly y'rs
-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From tismer at appliedbiometrics.com  Tue Sep 12 18:35:13 2000
From: tismer at appliedbiometrics.com (Christian Tismer)
Date: Tue, 12 Sep 2000 19:35:13 +0300
Subject: [Python-Dev] Challenge about print >> None
References: <200009112322.BAA29633@python.inrialpes.fr> <003001c01c88$aad09420$766940d5@hagrid>
Message-ID: <39BE5B41.16143E76@appliedbiometrics.com>


Fredrik Lundh wrote:
> 
> Vladimir wrote:
> > I understand that you want me to think this way. But that's not my
> > intuitive thinking. I would have written your example like this:
> >
> > def func(file=sys.stdout):
> >     print >> file, args
> >
> > This is clearer, compared to None, which is not a file.

This is not clearer.
Instead, it is presetting a parameter
with a mutable object - bad practice!

> Sigh.  Your code doesn't work.  Quoting the PEP, from the section
> that discusses why passing None is the same thing as passing no
> file at all:
> 
>     "Note: defaulting the file argument to sys.stdout at compile time
>     is wrong, because it doesn't work right when the caller assigns to
>     sys.stdout and then uses tables() without specifying the file."
> 
> I was sceptical at first, but the more I see of your counter-arguments,
> the more I support Guido here.  As he pointed out, None usually means
> "pretend I didn't pass this argument" in Python.  No difference here.
> 
> +1 on keeping print as it's implemented (None means default).
> -1 on making None behave like a NullFile.

Seconded!

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer at appliedbiometrics.com>
Applied Biometrics GmbH      :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com



From nascheme at enme.ucalgary.ca  Tue Sep 12 20:03:55 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Tue, 12 Sep 2000 12:03:55 -0600
Subject: [Python-Dev] PyWX (Python AOLserver plugin)
In-Reply-To: <EDFD2A95EE7DD31187350090279C6767E459CE@THRESHER>; from Brent Fulgham on Tue, Sep 12, 2000 at 10:40:36AM -0700
References: <EDFD2A95EE7DD31187350090279C6767E459CE@THRESHER>
Message-ID: <20000912120355.A2457@keymaster.enme.ucalgary.ca>

You probably want to address the python-dev mailing list.  I have CCed
this message in the hope that some of the more experienced developers
can help.  The PyWX website is at: http://pywx.idyll.org/.

On Tue, Sep 12, 2000 at 10:40:36AM -0700, Brent Fulgham wrote:
> We've run across some problems with the Python's internal threading
> design, and its handling of module loading.
> 
> The AOLserver plugin spawns new Python interpreter threads to
> service new HTTP connections.  Each thread is theoretically its
> own interpreter, and should have its own namespace, set of loaded
> packages, etc.
> 
> This is largely true, but we run across trouble with the way
> the individual threads handle 'argv' variables and current
> working directory.
> 
> CGI scripts typically pass data as variables to the script
> (as argv).  These (unfortunately) are changed globally across
> all Python interpreter threads, which can cause problems....
> 
> In addition, the current working directory is not unique
> among independent Python interpreters.  So if a script changes
> its directory to something, all other running scripts (in
> unique python interpreter threads) now have their cwd set to
> this directory.
> 
> So we have to address these issues at some point...  Any hope
> that something like this could be fixed in 2.0?

Are you using separate interpreters or one interpreter with multiple
threads?  It sounds like the latter.  If you use the latter then
definitely things like the process address space and the current working
directory are shared across the threads.  I don't think I understand
your design.  Can you explain the architecture of PyWX?

  Neil



From brent.fulgham at xpsystems.com  Tue Sep 12 20:18:03 2000
From: brent.fulgham at xpsystems.com (Brent Fulgham)
Date: Tue, 12 Sep 2000 11:18:03 -0700
Subject: [Python-Dev] RE: PyWX (Python AOLserver plugin)
Message-ID: <EDFD2A95EE7DD31187350090279C6767E45A09@THRESHER>

> Are you using separate interpreters or one interpreter with multiple
> threads?  It sounds like the latter.  If you use the latter then
> definitely things like the process address space and the 
> current working directory are shared across the threads.  I don't 
> think I understand your design.  Can you explain the architecture
> of PyWX?
> 

There are some documents on the website that give a bit more detail,
but in a nutshell we were using the Python interpreter thread concept
(Py_InterpreterNew, etc.) to allow 'independent' interpreters to
service HTTP requests in the server.

We are basically running afoul of the problems with the interpreter
isolation, as documented in the various Python embed docs.

"""Because sub-interpreters (and the main interpreter) are part of
the same process, the insulation between them isn't perfect -- for 
example, using low-level file operations like os.close() they can
(accidentally or maliciously) affect each other's open files. 
Because of the way extensions are shared between (sub-)interpreters,
some extensions may not work properly; this is especially likely
when the extension makes use of (static) global variables, or when
the extension manipulates its module's dictionary after its 
initialization"""

So we are basically stuck.  We can't link against Python multiple
times, so our only avenue to provide multiple interpreter instances
is to use the "Py_InterpreterNew" call and hope for the best.

Any hope for better interpreter isolation in 2.0? (2.1?)

-Brent




From Vladimir.Marangozov at inrialpes.fr  Tue Sep 12 20:51:21 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Tue, 12 Sep 2000 20:51:21 +0200 (CEST)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <39BE5B41.16143E76@appliedbiometrics.com> from "Christian Tismer" at Sep 12, 2000 07:35:13 PM
Message-ID: <200009121851.UAA31622@python.inrialpes.fr>

Christian Tismer wrote:
> 
> > Vladimir wrote:
> > > I understand that you want me to think this way. But that's not my
> > > intuitive thinking. I would have written your example like this:
> > >
> > > def func(file=sys.stdout):
> > >     print >> file, args
> > >
> > > This is clearer, compared to None, which is not a file.
> 
> This is not clearer.
> Instead, it is presetting a parameter
> with a mutable object - bad practice!

I think I mentioned that default function args and explicit output
streams are two disjoint issues. In the case of extended print,
half of us perceive that as a mix of concepts unrelated to Python,
the other half sees them as natural for specifying default behavior
in Python. The real challenge about print >> None is that the latter
half would need to explain to the former (including newcomers with
various backgrounds) that this is natural thinking in Python. I am
sceptical about the results, as long as one has to explain that this
is done on purpose to someone who thinks that this is a mix of concepts.

A naive illustration to the above is that "man fprintf" does not say
that when the stream is NULL, fprintf behaves like printf. Indeed,
fprintf(NULL, args) dumps core. There are two distinct functions for
different things. Either you care and you use fprintf (print >> ),
or you don't care and you use printf (print). Not both. If you
think you can do both in one shot, elaborate on that magic in the PEP.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From cgw at fnal.gov  Tue Sep 12 20:47:31 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Tue, 12 Sep 2000 13:47:31 -0500 (CDT)
Subject: [Python-Dev] Unexpected rexec behavior due to _sre
Message-ID: <14782.31299.800325.803340@buffalo.fnal.gov>

Python 1.5.2 (#3, Feb 11 2000, 15:30:14)  [GCC 2.7.2.3.f.1] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
>>> import rexec
>>> r = rexec.RExec()
>>> r.r_exec("import re")
>>> 

Python 2.0b1 (#2, Sep  8 2000, 12:10:17) 
[GCC 2.95.2 19991024 (release)] on linux2
Type "copyright", "credits" or "license" for more information.
>>> import rexec
>>> r=rexec.RExec()
>>> r.r_exec("import re")

Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/usr/lib/python2.0/rexec.py", line 253, in r_exec
    exec code in m.__dict__
  File "<string>", line 1, in ?
  File "/usr/lib/python2.0/rexec.py", line 264, in r_import
    return self.importer.import_module(mname, globals, locals, fromlist)
  File "/usr/lib/python2.0/ihooks.py", line 396, in import_module
    q, tail = self.find_head_package(parent, name)
  File "/usr/lib/python2.0/ihooks.py", line 432, in find_head_package
    q = self.import_it(head, qname, parent)
  File "/usr/lib/python2.0/ihooks.py", line 485, in import_it
    m = self.loader.load_module(fqname, stuff)
  File "/usr/lib/python2.0/ihooks.py", line 324, in load_module
    exec code in m.__dict__
  File "/usr/lib/python2.0/re.py", line 28, in ?
    from sre import *
  File "/usr/lib/python2.0/rexec.py", line 264, in r_import
    return self.importer.import_module(mname, globals, locals, fromlist)
  File "/usr/lib/python2.0/ihooks.py", line 396, in import_module
    q, tail = self.find_head_package(parent, name)
  File "/usr/lib/python2.0/ihooks.py", line 432, in find_head_package
    q = self.import_it(head, qname, parent)
  File "/usr/lib/python2.0/ihooks.py", line 485, in import_it
    m = self.loader.load_module(fqname, stuff)
  File "/usr/lib/python2.0/ihooks.py", line 324, in load_module
    exec code in m.__dict__
  File "/usr/lib/python2.0/sre.py", line 19, in ?
    import sre_compile
  File "/usr/lib/python2.0/rexec.py", line 264, in r_import
    return self.importer.import_module(mname, globals, locals, fromlist)
  File "/usr/lib/python2.0/ihooks.py", line 396, in import_module
    q, tail = self.find_head_package(parent, name)
  File "/usr/lib/python2.0/ihooks.py", line 432, in find_head_package
    q = self.import_it(head, qname, parent)
  File "/usr/lib/python2.0/ihooks.py", line 485, in import_it
    m = self.loader.load_module(fqname, stuff)
  File "/usr/lib/python2.0/ihooks.py", line 324, in load_module
    exec code in m.__dict__
  File "/usr/lib/python2.0/sre_compile.py", line 11, in ?
    import _sre
  File "/usr/lib/python2.0/rexec.py", line 264, in r_import
    return self.importer.import_module(mname, globals, locals, fromlist)
  File "/usr/lib/python2.0/ihooks.py", line 396, in import_module
    q, tail = self.find_head_package(parent, name)
  File "/usr/lib/python2.0/ihooks.py", line 439, in find_head_package
    raise ImportError, "No module named " + qname
ImportError: No module named _sre

Of course I can work around this by doing:

>>> r.ok_builtin_modules += '_sre',
>>> r.r_exec("import re")          

But I really shouldn't have to do this, right?  _sre is supposed to be
a low-level implementation detail.  I think I should still be able to 
"import re" in a restricted environment without having to be aware of
_sre.
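The underlying issue is that a restricted importer intercepts every import,
including the internal ones a public module performs, so its allow-list has
to cover implementation modules like _sre.  A toy sketch of such a guard
(not the actual rexec mechanism; the names here are illustrative):

```python
import builtins

# The allow-list must name private helpers too, or "import re" fails
# inside the sandbox when re pulls in _sre behind the scenes.
ALLOWED = {'re', 'sre', 'sre_compile', 'sre_parse', '_sre'}

real_import = builtins.__import__

def guarded_import(name, *args, **kwargs):
    if name.split('.')[0] not in ALLOWED:
        raise ImportError('No module named ' + name)
    return real_import(name, *args, **kwargs)

builtins.__import__ = guarded_import
try:
    import json                  # top-level name not on the list: rejected
except ImportError as err:
    blocked = str(err)
finally:
    builtins.__import__ = real_import   # always restore the real hook
```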



From effbot at telia.com  Tue Sep 12 21:12:20 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 12 Sep 2000 21:12:20 +0200
Subject: [Python-Dev] urllib problems under 2.0
Message-ID: <005e01c01ced$6bb19180$766940d5@hagrid>

the proxy code in 2.0b1's new urllib is broken on my box.

here's the troublemaker:

                proxyServer = str(_winreg.QueryValueEx(internetSettings,
                                                       'ProxyServer')[0])
                if ';' in proxyServer:        # Per-protocol settings
                    for p in proxyServer.split(';'):
                        protocol, address = p.split('=')
                        proxies[protocol] = '%s://%s' % (protocol, address)
                else:        # Use one setting for all protocols
                    proxies['http'] = 'http://%s' % proxyServer
                    proxies['ftp'] = 'ftp://%s' % proxyServer

now, on my box, the proxyServer string is "https=127.0.0.1:1080"
(an encryption proxy used by my bank), so the above code happily
creates the following proxy dictionary:

proxy = {
    "http": "http://https=127.0.0.1:1080"
    "ftp": "http://https=127.0.0.1:1080"
}

which, of course, results in a "host not found" no matter what URL
I pass to urllib...

:::

a simple fix would be to change the initial test to:

                if "=" in proxyServer:

does anyone have a better idea, or should I check this one
in right away?
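for what it's worth, the repaired branch might look like this (a sketch;
parse_proxy_server is a hypothetical helper, not the actual urllib code),
treating the value as per-protocol whenever it contains "=":

```python
def parse_proxy_server(proxy_server):
    """Turn a Windows ProxyServer registry value into a proxies dict."""
    proxies = {}
    if '=' in proxy_server:
        # Per-protocol settings: "http=host:port;https=host:port",
        # possibly a single entry with no ';' at all.
        for part in proxy_server.split(';'):
            protocol, address = part.split('=', 1)
            proxies[protocol] = '%s://%s' % (protocol, address)
    else:
        # One "host:port" setting shared by all protocols.
        proxies['http'] = 'http://%s' % proxy_server
        proxies['ftp'] = 'ftp://%s' % proxy_server
    return proxies
```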

</F>




From titus at caltech.edu  Tue Sep 12 21:14:12 2000
From: titus at caltech.edu (Titus Brown)
Date: Tue, 12 Sep 2000 12:14:12 -0700
Subject: [Python-Dev] Re: [PyWX] RE: PyWX (Python AOLserver plugin)
In-Reply-To: <EDFD2A95EE7DD31187350090279C6767E45A09@THRESHER>; from brent.fulgham@xpsystems.com on Tue, Sep 12, 2000 at 11:18:03AM -0700
References: <EDFD2A95EE7DD31187350090279C6767E45A09@THRESHER>
Message-ID: <20000912121412.B6850@cns.caltech.edu>

-> > Are you using separate interpreters or one interpreter with multiple
-> > threads?  It sounds like the latter.  If you use the latter then
-> > definitely things like the process address space and the 
-> > current working directory are shared across the threads.  I don't 
-> > think I understand your design.  Can you explain the architecture
-> > of PyWX?
-> > 
-> 
-> """Because sub-interpreters (and the main interpreter) are part of
-> the same process, the insulation between them isn't perfect -- for 
-> example, using low-level file operations like os.close() they can
-> (accidentally or maliciously) affect each other's open files. 
-> Because of the way extensions are shared between (sub-)interpreters,
-> some extensions may not work properly; this is especially likely
-> when the extension makes use of (static) global variables, or when
-> the extension manipulates its module's dictionary after its 
-> initialization"""
-> 
-> So we are basically stuck.  We can't link against Python multiple
-> times, so our only avenue to provide multiple interpreter instances
-> is to use the "Py_InterpreterNew" call and hope for the best.
-> 
-> Any hope for better interpreter isolation in 2.0? (2.1?)

Perhaps a better question is: is there any way to get around these problems
without moving from a threaded model (which we like) to a process model?

Many of the problems we're running into because of this lack of interpreter
isolation are due to the UNIX threading model, as I see it.  For example,
the low-level file operation interference, cwd problems, and environment
variable problems are all caused by UNIX's determination to share this stuff
across threads.  I don't see any way of changing this without causing far
more problems than we fix.

cheers,
--titus



From effbot at telia.com  Tue Sep 12 21:34:58 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 12 Sep 2000 21:34:58 +0200
Subject: [Python-Dev] Challenge about print >> None
References: <200009121851.UAA31622@python.inrialpes.fr>
Message-ID: <006e01c01cf0$921a4da0$766940d5@hagrid>

vladimir wrote:
> In the case of extended print, half of us perceive that as a mix of
> concepts unrelated to Python, the other half sees them as natural
> for specifying default behavior in Python.

Sigh.  None doesn't mean "default", it means "doesn't exist"
"nothing" "ingenting" "nada" "none" etc.

"def foo(): return" uses None to indicate that there was no
return value.

"map(None, seq)" uses None to indicate that there is really
no function to map things through.

"import" stores None in sys.modules to indicate that certain
package components don't exist.

"print >>None, value" uses None to indicate that there is
really no redirection -- in other words, the value is printed
in the usual location.
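the call-time resolution this implies can be sketched as follows
(modern syntax; "show" is a hypothetical stand-in for the print
machinery, not real code from the patch):

```python
import io
import sys

def show(value, file=None):
    # None means "no redirection": look up sys.stdout at call time,
    # so assignments made by the caller are honoured.
    if file is None:
        file = sys.stdout
    file.write(str(value) + '\n')

capture = io.StringIO()
saved, sys.stdout = sys.stdout, capture
show('hello')               # picks up the redirected stdout
sys.stdout = saved
```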

</None>




From effbot at telia.com  Tue Sep 12 21:40:04 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 12 Sep 2000 21:40:04 +0200
Subject: [Python-Dev] XML runtime errors?
Message-ID: <009601c01cf1$467458e0$766940d5@hagrid>

stoopid question: why the heck is xmllib using
"RuntimeError" to flag XML syntax errors?

raise RuntimeError, 'Syntax error at line %d: %s' % (self.lineno, message)

what's wrong with "SyntaxError"?

</F>




From Vladimir.Marangozov at inrialpes.fr  Tue Sep 12 21:43:32 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Tue, 12 Sep 2000 21:43:32 +0200 (CEST)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <006e01c01cf0$921a4da0$766940d5@hagrid> from "Fredrik Lundh" at Sep 12, 2000 09:34:58 PM
Message-ID: <200009121943.VAA31771@python.inrialpes.fr>

Fredrik Lundh wrote:
> 
> vladimir wrote:
> > In the case of extended print, half of us perceive that as a mix of
> > concepts unrelated to Python, the other half sees them as natural
> > for specifying default behavior in Python.
> 
> Sigh.  None doesn't mean "default", it means "doesn't exist"
> "nothing" "ingenting" "nada" "none" etc.
> 
> "def foo(): return" uses None to indicate that there was no
> return value.
> 
> "map(None, seq)" uses None to indicate that there is really
> no function to map things through.
> 
> "import" stores None in sys.modules to indicate that certain
> package components don't exist.
> 
> "print >>None, value" uses None to indicate that there is
> really no redirection -- in other words, the value is printed
> in the usual location.

PEP that without the import example (it's obfuscated). If you can add
more of them, you'll save yourself time answering questions. I couldn't
have done it, because I still belong to my half <wink>.

hard-to-make-progress-but-constructivism-wins-in-the-end'ly y'rs
-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From guido at beopen.com  Tue Sep 12 23:46:32 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 12 Sep 2000 16:46:32 -0500
Subject: [Python-Dev] Re: [PyWX] RE: PyWX (Python AOLserver plugin)
In-Reply-To: Your message of "Tue, 12 Sep 2000 12:14:12 MST."
             <20000912121412.B6850@cns.caltech.edu> 
References: <EDFD2A95EE7DD31187350090279C6767E45A09@THRESHER>  
            <20000912121412.B6850@cns.caltech.edu> 
Message-ID: <200009122146.QAA01374@cj20424-a.reston1.va.home.com>

> > This is largely true, but we run across trouble with the way
> > the individual threads handle 'argv' variables and current
> > working directory.
> > 
> > CGI scripts typically pass data as variables to the script
> > (as argv).  These (unfortunately) are changed globally across
> > all Python interpreter threads, which can cause problems....
> > 
> > In addition, the current working directory is not unique
> > among independent Python interpreters.  So if a script changes
> > its directory to something, all other running scripts (in
> > unique python interpreter threads) now have their cwd set to
> > this directory.

There's no easy way to fix the current directory problem.  Just tell
your CGI programmers that os.chdir() is off-limits; you may remove it
from the os module (and from the posix module) during initialization
of your interpreter to enforce this.

I don't understand how you would be sharing sys.argv between multiple
interpreters.  Sure, the initial sys.argv is the same (they all
inherit that from the C main()) but after that you can set it to
whatever you want and they should not be shared.

Are you *sure* you are using PyInterpreterState_New() and not just
creating new threads?

> -> So we are basically stuck.  We can't link against Python multiple
> -> times, so our only avenue to provide multiple interpreter instances
> -> is to use the "Py_InterpreterNew" call and hope for the best.
> -> 
> -> Any hope for better interpreter isolation in 2.0? (2.1?)
> 
> Perhaps a better question is: is there any way to get around these problems
> without moving from a threaded model (which we like) to a process model?
> 
> Many of the problems we're running into because of this lack of interpreter
> isolation are due to the UNIX threading model, as I see it.  For example,
> the low-level file operation interference, cwd problems, and environment
> variable problems are all caused by UNIX's determination to share this stuff
> across threads.  I don't see any way of changing this without causing far
> more problems than we fix.

That's the whole point of using threads -- they share as much state as
they can.  I don't see how you can do better without going to
processes.  You could perhaps maintain the illusion of a per-thread
current directory, but you'd have to modify every function that uses
pathnames to take the simulated pwd into account...
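A rough sketch of that per-thread illusion using thread-local storage
(the helpers are hypothetical, and every real file operation would have
to route through them for the illusion to hold):

```python
import os
import threading

_state = threading.local()   # each thread gets its own attribute namespace

def pseudo_chdir(path):
    # Record a simulated cwd per thread instead of calling os.chdir().
    base = getattr(_state, 'cwd', os.getcwd())
    _state.cwd = os.path.join(base, path)   # an absolute path replaces base

def pseudo_getcwd():
    return getattr(_state, 'cwd', os.getcwd())

results = {}

def worker(name, path):
    pseudo_chdir(path)
    results[name] = pseudo_getcwd()

t1 = threading.Thread(target=worker, args=('a', '/tmp'))
t2 = threading.Thread(target=worker, args=('b', '/var'))
t1.start(); t2.start()
t1.join(); t2.join()
# each thread saw only its own simulated cwd
```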

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Tue Sep 12 23:48:47 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 12 Sep 2000 16:48:47 -0500
Subject: [Python-Dev] Unexpected rexec behavior due to _sre
In-Reply-To: Your message of "Tue, 12 Sep 2000 13:47:31 EST."
             <14782.31299.800325.803340@buffalo.fnal.gov> 
References: <14782.31299.800325.803340@buffalo.fnal.gov> 
Message-ID: <200009122148.QAA01404@cj20424-a.reston1.va.home.com>

> Python 1.5.2 (#3, Feb 11 2000, 15:30:14)  [GCC 2.7.2.3.f.1] on linux2
> Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
> >>> import rexec
> >>> r = rexec.RExec()
> >>> r.r_exec("import re")
> >>> 
> 
> Python 2.0b1 (#2, Sep  8 2000, 12:10:17) 
> [GCC 2.95.2 19991024 (release)] on linux2
> Type "copyright", "credits" or "license" for more information.
> >>> import rexec
> >>> r=rexec.RExec()
> >>> r.r_exec("import re")
> 
> Traceback (most recent call last):
[...]
> ImportError: No module named _sre
> 
> Of course I can work around this by doing:
> 
> >>> r.ok_builtin_modules += '_sre',
> >>> r.r_exec("import re")          
> 
> But I really shouldn't have to do this, right?  _sre is supposed to be
> a low-level implementation detail.  I think I should still be able to 
> "import re" in a restricted environment without having to be aware of
> _sre.

The rexec.py module needs to be fixed.  Should be simple enough.
There may be other modules that it should allow too!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Tue Sep 12 23:52:45 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 12 Sep 2000 16:52:45 -0500
Subject: [Python-Dev] urllib problems under 2.0
In-Reply-To: Your message of "Tue, 12 Sep 2000 21:12:20 +0200."
             <005e01c01ced$6bb19180$766940d5@hagrid> 
References: <005e01c01ced$6bb19180$766940d5@hagrid> 
Message-ID: <200009122152.QAA01423@cj20424-a.reston1.va.home.com>

> the proxy code in 2.0b1's new urllib is broken on my box.

Before you fix this, let's figure out what the rules for proxy
settings in the registry are supposed to be, and document these.
How do these get set?

(This should also be documented for Unix if it isn't already; problems
with configuring proxies are ever-recurring questions it seems.  I
haven't used a proxy in years so I'm not good at fixing it... :-)

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Tue Sep 12 23:55:48 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 12 Sep 2000 16:55:48 -0500
Subject: [Python-Dev] XML runtime errors?
In-Reply-To: Your message of "Tue, 12 Sep 2000 21:40:04 +0200."
             <009601c01cf1$467458e0$766940d5@hagrid> 
References: <009601c01cf1$467458e0$766940d5@hagrid> 
Message-ID: <200009122155.QAA01452@cj20424-a.reston1.va.home.com>

[/F]
> stoopid question: why the heck is xmllib using
> "RuntimeError" to flag XML syntax errors?

Because it's too cheap to declare its own exception?

> raise RuntimeError, 'Syntax error at line %d: %s' % (self.lineno, message)
> 
> what's wrong with "SyntaxError"?

That would be the wrong exception unless it's parsing Python source
code.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From akuchlin at mems-exchange.org  Tue Sep 12 22:56:10 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Tue, 12 Sep 2000 16:56:10 -0400
Subject: [Python-Dev] Unexpected rexec behavior due to _sre
In-Reply-To: <200009122148.QAA01404@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Tue, Sep 12, 2000 at 04:48:47PM -0500
References: <14782.31299.800325.803340@buffalo.fnal.gov> <200009122148.QAA01404@cj20424-a.reston1.va.home.com>
Message-ID: <20000912165610.A554@kronos.cnri.reston.va.us>

On Tue, Sep 12, 2000 at 04:48:47PM -0500, Guido van Rossum wrote:
>The rexec.py module needs to be fixed.  Should be simple enough.
>There may be other modules that it should allow too!

Are we sure that it's not possible to engineer segfaults or other
nastiness by deliberately feeding _sre bad data?  This was my primary
reason for not exposing the PCRE bytecode interface, since it would
have been difficult to make the code robust against hostile bytecodes.

--amk



From guido at beopen.com  Wed Sep 13 00:27:01 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 12 Sep 2000 17:27:01 -0500
Subject: [Python-Dev] Unexpected rexec behavior due to _sre
In-Reply-To: Your message of "Tue, 12 Sep 2000 16:56:10 -0400."
             <20000912165610.A554@kronos.cnri.reston.va.us> 
References: <14782.31299.800325.803340@buffalo.fnal.gov> <200009122148.QAA01404@cj20424-a.reston1.va.home.com>  
            <20000912165610.A554@kronos.cnri.reston.va.us> 
Message-ID: <200009122227.RAA01676@cj20424-a.reston1.va.home.com>

[AMK]
> Are we sure that it's not possible to engineer segfaults or other
> nastiness by deliberately feeding _sre bad data?  This was my primary
> reason for not exposing the PCRE bytecode interface, since it would
> have been difficult to make the code robust against hostile bytecodes.

Good point!

But how do we support using the re module in restricted mode then?

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From skip at mojam.com  Tue Sep 12 23:26:49 2000
From: skip at mojam.com (Skip Montanaro)
Date: Tue, 12 Sep 2000 16:26:49 -0500 (CDT)
Subject: [Python-Dev] urllib problems under 2.0
In-Reply-To: <200009122152.QAA01423@cj20424-a.reston1.va.home.com>
References: <005e01c01ced$6bb19180$766940d5@hagrid>
	<200009122152.QAA01423@cj20424-a.reston1.va.home.com>
Message-ID: <14782.40857.437768.652808@beluga.mojam.com>

    Guido> (This should also be documented for Unix if it isn't already;
    Guido> problems with configuring proxies are ever-recurring questions it
    Guido> seems.  I haven't used a proxy in years so I'm not good at fixing
    Guido> it... :-)

Under Unix, proxy server specifications are simply URLs (or URIs?) that
specify a protocol ("scheme" in urlparse parlance), a host and (usually) a
port, e.g.:

    http_proxy='http://manatee.mojam.com:3128' ; export http_proxy

I've been having an ongoing discussion with a Windows user who seems to be
stumbling upon the same problem that Fredrik encountered.  If I read the
urllib.getproxies_registry code correctly, it looks like it's expecting a
string that doesn't include a protocol, e.g. simply
"manatee.mojam.com:3128".  This seems a bit inflexible to me, since you
might want to offer multiprotocol proxies through a single URI (though that
may well be what Windows offers its users).  For instance, I believe Squid
will proxy both ftp and http requests via HTTP.  Requiring ftp proxies to do
so via ftp seems inflexible.  My thought (and I can't test this) is that the
code around urllib.py line 1124 should be

                else:        # Use one setting for all protocols
                    proxies['http'] = proxyServer
                    proxies['ftp'] = proxyServer

but that's just a guess based upon the values this other fellow has sent me
and assumes that the Windows registry is supposed to hold proxy information
that includes the protocol.  I cc'd Mark Hammond on my last email to the
user.  Perhaps he'll have something interesting to say when he gets up.

Skip



From fdrake at beopen.com  Tue Sep 12 23:26:17 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Tue, 12 Sep 2000 17:26:17 -0400 (EDT)
Subject: [Python-Dev] Unexpected rexec behavior due to _sre
In-Reply-To: <200009122227.RAA01676@cj20424-a.reston1.va.home.com>
References: <14782.31299.800325.803340@buffalo.fnal.gov>
	<200009122148.QAA01404@cj20424-a.reston1.va.home.com>
	<20000912165610.A554@kronos.cnri.reston.va.us>
	<200009122227.RAA01676@cj20424-a.reston1.va.home.com>
Message-ID: <14782.40825.627148.54355@cj42289-a.reston1.va.home.com>

Guido van Rossum writes:
 > But how do we support using the re module in restricted mode then?

  Perhaps providing a bastion wrapper around the re module, which
would allow the implementation details to be completely hidden?
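  A toy sketch of the idea (not the stdlib Bastion module, which wraps
class instances rather than modules): copy only an explicit allow-list
of attributes onto a fresh object, so implementation details stay
unreachable:

```python
import re

def bastion(module, allowed):
    # Expose only the named attributes; everything else is simply absent.
    class _Bastion:
        pass
    safe = _Bastion()
    for name in allowed:
        setattr(safe, name, getattr(module, name))
    return safe

safe_re = bastion(re, ['compile', 'match', 'search', 'sub', 'escape'])
```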


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From effbot at telia.com  Tue Sep 12 23:50:53 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 12 Sep 2000 23:50:53 +0200
Subject: [Python-Dev] Unexpected rexec behavior due to _sre
References: <14782.31299.800325.803340@buffalo.fnal.gov> <200009122148.QAA01404@cj20424-a.reston1.va.home.com> <20000912165610.A554@kronos.cnri.reston.va.us>
Message-ID: <01d701c01d03$86dfdfa0$766940d5@hagrid>

andrew wrote:
> Are we sure that it's not possible to engineer segfaults or other
> nastiness by deliberately feeding _sre bad data?

it's pretty easy to trick _sre into reading from the wrong place
(however, it shouldn't be possible to return such data to the
Python level, and you cannot write into arbitrary locations).

fixing this would probably hurt performance, but I can look into it.

can the Bastion module be used to wrap entire modules?

</F>




From effbot at telia.com  Wed Sep 13 00:01:36 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Wed, 13 Sep 2000 00:01:36 +0200
Subject: [Python-Dev] XML runtime errors?
References: <009601c01cf1$467458e0$766940d5@hagrid>  <200009122155.QAA01452@cj20424-a.reston1.va.home.com>
Message-ID: <01f701c01d05$0aa98e20$766940d5@hagrid>

> [/F]
> > stoopid question: why the heck is xmllib using
> > "RuntimeError" to flag XML syntax errors?
> 
> Because it's too cheap to declare its own exception?

how about adding:

    class XMLError(RuntimeError):
        pass

(and maybe one or more XMLError subclasses?)
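deriving from RuntimeError would keep existing except clauses working
while letting new code catch something specific; a sketch (XMLSyntaxError
is an invented name):

```python
class XMLError(RuntimeError):
    pass

class XMLSyntaxError(XMLError):
    pass

# Old callers that catch RuntimeError still work:
try:
    raise XMLSyntaxError('Syntax error at line 3: mismatched tag')
except RuntimeError as err:
    caught = err
```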

> > what's wrong with "SyntaxError"?
> 
> That would be the wrong exception unless it's parsing Python source
> code.

gotta fix netrc.py then...

</F>




From gstein at lyra.org  Tue Sep 12 23:50:54 2000
From: gstein at lyra.org (Greg Stein)
Date: Tue, 12 Sep 2000 14:50:54 -0700
Subject: [Python-Dev] PyWX (Python AOLserver plugin)
In-Reply-To: <20000912120355.A2457@keymaster.enme.ucalgary.ca>; from nascheme@enme.ucalgary.ca on Tue, Sep 12, 2000 at 12:03:55PM -0600
References: <EDFD2A95EE7DD31187350090279C6767E459CE@THRESHER> <20000912120355.A2457@keymaster.enme.ucalgary.ca>
Message-ID: <20000912145053.B22138@lyra.org>

On Tue, Sep 12, 2000 at 12:03:55PM -0600, Neil Schemenauer wrote:
>...
> On Tue, Sep 12, 2000 at 10:40:36AM -0700, Brent Fulgham wrote:
>...
> > This is largely true, but we run across trouble with the way
> > the individual threads handle 'argv' variables and current
> > working directory.

Are you using Py_NewInterpreter? If so, then it will use the same argv
across all interpreters that it creates.  Use PyInterpreterState_New
instead; it gives you finer-grained control over what goes into an
interpreter/thread state pair.

> > CGI scripts typically pass data as variables to the script
> > (as argv).  These (unfortunately) are changed globally across
> > all Python interpreter threads, which can cause problems....

They're sharing a list, I believe. See above.

This will definitely be true if you have a single interpreter and multiple
thread states.

> > In addition, the current working directory is not unique
> > among independent Python interpreters.  So if a script changes
> > its directory to something, all other running scripts (in
> > unique python interpreter threads) now have their cwd set to
> > this directory.

As pointed out elsewhere, this is a factor of the OS, not Python. And
Python's design really isn't going to attempt to address this (it really
doesn't make much sense to change these semantics).

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/



From fdrake at beopen.com  Tue Sep 12 23:51:09 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Tue, 12 Sep 2000 17:51:09 -0400 (EDT)
Subject: [Python-Dev] New Python 2.0 documentation packages
Message-ID: <14782.42317.633120.757620@cj42289-a.reston1.va.home.com>

  I've just released a new version of the documentation packages for
the Python 2.0 beta 1 release.  These are versioned 2.0b1.1 and dated
today.  These include a variety of small improvements and additions,
but the big deal is:

    The Module Index is back!

  Pick it up at your friendly Python headquarters:

    http://www.pythonlabs.com/tech/python2.0/


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From brent.fulgham at xpsystems.com  Tue Sep 12 23:55:10 2000
From: brent.fulgham at xpsystems.com (Brent Fulgham)
Date: Tue, 12 Sep 2000 14:55:10 -0700
Subject: [Python-Dev] Re: [PyWX] RE: PyWX (Python AOLserver plugin)
Message-ID: <EDFD2A95EE7DD31187350090279C6767E45B1C@THRESHER>

> There's no easy way to fix the current directory problem.  Just tell
> your CGI programmers that os.chdir() is off-limits; you may remove it
> from the os module (and from the posix module) during initialization
> of your interpreter to enforce this.
>

This is probably a good idea.
 
[ ... snip ... ]

> Are you *sure* you are using PyInterpreterState_New() and not just
> creating new threads?
>
Yes.
 
[ ... snip ... ]

> > Many of the problems we're running into because of this 
> > lack of interpreter isolation are due to the UNIX threading 
> > model, as I see it. 

Titus -- any chance s/UNIX/pthreads/ ?  I.e., would using something
like AOLserver's threading libraries help by providing more
thread-local storage in which to squirrel away various environment
data, dictionaries, etc.?

> > For example, the low-level file operation interference, 
> > cwd problems, and environment variable problems are all caused 
> > by UNIX's determination to share this stuff across threads.  
> > I don't see any way of changing this without causing far
> > more problems than we fix.
> 
> That's the whole point of using threads -- they share as much state as
> they can.  I don't see how you can do better without going to
> processes.  You could perhaps maintain the illusion of a per-thread
> current directory, but you'd have to modify every function that uses
> pathnames to take the simulated pwd into account...
> 

I think we just can't be all things to all people, which is a point
Michael has patiently been making this whole time.  I propose:

1.  We disable os.chdir in PyWX initialization.
2.  We assume "standard" CGI behavior of CGIDIR being a single 
directory that all CGIs share.
3.  We address sys.argv (is this just a bug on our part maybe?)
4.  Can we address the os.environ leak similarly?  I'm trying to 
think of cases where a CGI really should be allowed to add to
the environment.  Maybe someone needs to set an environment variable
used by some other program that will be run in a subshell.  If
so, maybe we can somehow serialize activities that modify os.environ
in this way?

Idea:  If Python forks a subshell, it inherits the parent
process's environment.  That's probably the only time we really want
to let someone modify the os.environ -- so it can be passed to
a child.  What if we serialized through the fork somehow like so:

1.  Python script wants to set environment, makes call to os.environ
1a. We serialize here, so now we are single-threaded
2.  Script forks a subshell.
2b. We remove the entry we just added and release mutex.
3.  Execution continues.

This probably still won't work because the script might now expect
these variables to be in the environment dictionary.

Perhaps we can dummy up a fake os.environ dictionary per interpreter
thread that doesn't actually change the true UNIX environment?

What do you guys think...

Thanks,

-Brent



From cgw at fnal.gov  Tue Sep 12 23:57:51 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Tue, 12 Sep 2000 16:57:51 -0500 (CDT)
Subject: [Python-Dev] Unexpected rexec behavior due to _sre
In-Reply-To: <20000912165610.A554@kronos.cnri.reston.va.us>
References: <14782.31299.800325.803340@buffalo.fnal.gov>
	<200009122148.QAA01404@cj20424-a.reston1.va.home.com>
	<20000912165610.A554@kronos.cnri.reston.va.us>
Message-ID: <14782.42719.159114.708604@buffalo.fnal.gov>

Andrew Kuchling writes:
 > On Tue, Sep 12, 2000 at 04:48:47PM -0500, Guido van Rossum wrote:
 > >The rexec.py module needs to be fixed.  Should be simple enough.
 > >There may be other modules that it should allow too!
 > 
 > Are we sure that it's not possible to engineer segfaults or other
 > nastiness by deliberately feeding _sre bad data?  This was my primary
 > reason for not exposing the PCRE bytecode interface, since it would
 > have been difficult to make the code robust against hostile bytecodes.

If it used to be OK to "import re" in restricted mode, and now it
isn't, then this is an incompatible change and needs to be documented.
There are people running webservers and stuff who are counting on
being able to use the re module in restricted mode.




From brent.fulgham at xpsystems.com  Tue Sep 12 23:58:40 2000
From: brent.fulgham at xpsystems.com (Brent Fulgham)
Date: Tue, 12 Sep 2000 14:58:40 -0700
Subject: [Python-Dev] Re: [PyWX] RE: PyWX (Python AOLserver plugin)
Message-ID: <EDFD2A95EE7DD31187350090279C6767E45B23@THRESHER>

> > Are you *sure* you are using PyInterpreterState_New() and not just
> > creating new threads?
> >
> Yes.
>  
Hold on.  This may be our error.

And I'm taking this traffic off python-dev now.  Thanks for 
all the helpful comments!

Regards,

-Brent



From guido at beopen.com  Wed Sep 13 01:07:40 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 12 Sep 2000 18:07:40 -0500
Subject: [Python-Dev] Re: [PyWX] RE: PyWX (Python AOLserver plugin)
In-Reply-To: Your message of "Tue, 12 Sep 2000 14:55:10 MST."
             <EDFD2A95EE7DD31187350090279C6767E45B1C@THRESHER> 
References: <EDFD2A95EE7DD31187350090279C6767E45B1C@THRESHER> 
Message-ID: <200009122307.SAA02146@cj20424-a.reston1.va.home.com>

> 3.  We address sys.argv (is this just a bug on our part maybe?)

Probably.  The variables are not shared -- their initial values are the
same.

> 4.  Can we address the os.environ leak similarly?  I'm trying to 
> think of cases where a CGI really should be allowed to add to
> the environment.  Maybe someone needs to set an environment variable
> used by some other program that will be run in a subshell.  If
> so, maybe we can somehow serialize activities that modify os.environ
> in this way?

You each get a copy of os.environ.

Running things in subshells from threads is asking for trouble!

But if you have to, you can write your own os.system() substitute that
uses os.execve() -- this allows you to pass in the environment
explicitly.

You may have to take out (override) the code that automatically calls
os.putenv() when an assignment into os.environ is made.
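
Such a substitute might look like the sketch below. It uses the subprocess
module rather than a hand-rolled fork/execve, and the environment variable
name is made up for illustration:

```python
import os
import subprocess
import sys

def run_with_env(argv, env):
    # Spawn a child with an explicit environment instead of letting it
    # inherit the shared process-wide one (execve underlies this on Unix).
    return subprocess.run(argv, env=env, capture_output=True, text=True)

fake_env = dict(os.environ)            # per-interpreter copy
fake_env["SCRIPT_SETTING"] = "demo"    # hypothetical CGI variable

result = run_with_env(
    [sys.executable, "-c", "import os; print(os.environ['SCRIPT_SETTING'])"],
    fake_env,
)
```

The child sees SCRIPT_SETTING while the parent's real environment is never
touched.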

> Idea:  If Python forks a subshell, it inherits the parent
> process's environment.  That's probably the only time we really want
> to let someone modify the os.environ -- so it can be passed to
> a child.  What if we serialized through the fork somehow like so:
> 
> 1.  Python script wants to set environment, makes call to os.environ
> 1a. We serialize here, so now we are single-threaded
> 2.  Script forks a subshell.
> 2b. We remove the entry we just added and release mutex.
> 3.  Execution continues.
> 
> This probably still won't work because the script might now expect
> these variables to be in the environment dictionary.
> 
> Perhaps we can dummy up a fake os.environ dictionary per interpreter
> thread that doesn't actually change the true UNIX environment?

See above.  You can do it!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From jcollins at pacificnet.net  Wed Sep 13 02:05:03 2000
From: jcollins at pacificnet.net (jcollins at pacificnet.net)
Date: Tue, 12 Sep 2000 17:05:03 -0700 (PDT)
Subject: [Python-Dev] New Python 2.0 documentation packages
In-Reply-To: <14782.42317.633120.757620@cj42289-a.reston1.va.home.com>
Message-ID: <Pine.LNX.4.21.0009121659550.995-100000@euclid.endtech.com>

Could you also include the .info files?  I have tried unsuccessfully to
build the .info files in the distribution.  Here is the output from make:

<stuff deleted>
make[2]: Leaving directory `/home/collins/Python-2.0b1/Doc/html'
make[1]: Leaving directory `/home/collins/Python-2.0b1/Doc'
../tools/mkinfo ../html/api/api.html
perl -I/home/collins/Python-2.0b1/Doc/tools
/home/collins/Python-2.0b1/Doc/tools/html2texi.pl
/home/collins/Python-2.0b1/Doc/html/api/api.html
<CODE>
  "__all__"
Expected string content of <A> in <DT>: HTML::Element=HASH(0x8241fbc) at
/home/collins/Python-2.0b1/Doc/tools/html2texi.pl line 550.
make: *** [python-api.info] Error 255


Thanks,

Jeff



On Tue, 12 Sep 2000, Fred L. Drake, Jr. wrote:

> 
>   I've just released a new version of the documentation packages for
> the Python 2.0 beta 1 release.  These are versioned 2.0b1.1 and dated
> today.  These include a variety of small improvements and additions,
> but the big deal is:
> 
>     The Module Index is back!
> 
>   Pick it up at your friendly Python headquarters:
> 
>     http://www.pythonlabs.com/tech/python2.0/
> 
> 
>   -Fred
> 
> 




From greg at cosc.canterbury.ac.nz  Wed Sep 13 03:20:06 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed, 13 Sep 2000 13:20:06 +1200 (NZST)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <006e01c01cf0$921a4da0$766940d5@hagrid>
Message-ID: <200009130120.NAA20286@s454.cosc.canterbury.ac.nz>

Fredrik Lundh <effbot at telia.com>:

> "map(None, seq)" uses None to indicate that there are really
> no function to map things through.

This one is just as controversial as print>>None. I would
argue that it *doesn't* mean "no function", because that
doesn't make sense -- there always has to be *some* function.
It really means "use a default function which constructs
a tuple from its arguments".
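
That implicit default can be spelled out explicitly (a sketch; with one
sequence map(None, ...) yields the elements themselves, with several it
builds tuples, padding the shorter sequences with None -- which is what
zip_longest's default fillvalue gives):

```python
from itertools import zip_longest

def map_default(*seqs):
    # The "function" that map(None, ...) implicitly applies.
    if len(seqs) == 1:
        return list(seqs[0])
    return [tuple(t) for t in zip_longest(*seqs)]

pairs = map_default([1, 2, 3], "ab")   # shorter sequence padded with None
```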

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From mhagger at alum.mit.edu  Wed Sep 13 07:08:57 2000
From: mhagger at alum.mit.edu (Michael Haggerty)
Date: Wed, 13 Sep 2000 01:08:57 -0400 (EDT)
Subject: [Python-Dev] Re: [PyWX] RE: PyWX (Python AOLserver plugin)
In-Reply-To: <EDFD2A95EE7DD31187350090279C6767E45B1C@THRESHER>
References: <EDFD2A95EE7DD31187350090279C6767E45B1C@THRESHER>
Message-ID: <14783.3049.364561.641240@freak.kaiserty.com>

Brent Fulgham writes:
> Titus -- any chance s/UNIX/pthreads/ ?  I.e., would using something
> like AOLserver's threading libraries help by providing more
> thread-local storage in which to squirrel away various environment
> data, dictionaries, etc.?

The problem isn't a lack of thread-local storage.  The problem is that
*everything* in unix assumes a single environment and a single PWD.
Of course we could emulate a complete unix-like virtual machine within
every thread :-)

> Idea:  If Python forks a subshell, it inherits the parent
> process's environment.  That's probably the only time we really want
> to let someone modify the os.environ -- so it can be passed to
> a child.

Let's set os.environ to a normal dict (i.e., break the connection to
the process's actual environment) initialized to the contents of the
environment.  This fake environment can be passed to a child using
execve.  We would have to override os.system() and its cousins to use
execve with this fake environment.

We only need to figure out:

1. Whether we can just assign a dict to os.environ (and
   posix.environ?) to kill their special behaviors;

2. Whether such changes can be made separately in each interpreter
   without them affecting one another;

3. Whether special measures have to be taken to cause the fake
   environment dictionary to be garbage collected when the interpreter
   is destroyed.
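
Point 1 can at least be prototyped at the Python level (a sketch only;
whether each interpreter's copy really stays independent is exactly the
open question):

```python
import os

real_environ = os.environ            # keep a handle so we can restore it

# Replace the magic mapping (whose __setitem__ calls putenv()) with a
# plain dict snapshot; assignments then stay inside this interpreter.
os.environ = dict(real_environ)
os.environ["CGI_VAR"] = "x"          # hypothetical per-request variable

child_env = dict(os.environ)         # what we would hand to execve
os.environ = real_environ            # restore the real mapping
```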

Regarding PWD there's nothing we can realistically do except document
this limitation and clobber os.chdir() as suggested by Guido.

Michael

--
Michael Haggerty
mhagger at alum.mit.edu



From just at letterror.com  Wed Sep 13 10:33:15 2000
From: just at letterror.com (Just van Rossum)
Date: Wed, 13 Sep 2000 09:33:15 +0100
Subject: [Python-Dev] Challenge about print >> None
Message-ID: <l03102802b5e4e70319fa@[193.78.237.174]>

Vladimir Marangozov wrote:
>And let me add to that the following summary: the whole extended
>print idea is about convenience. Convenience for those that know
>what file redirection is. Not for newbies. You can't argue too much
>about extended print as an intuitive concept for newbies.

That's exactly what disturbs me, too. The main reason for the extended
print statement is to make it easier for newbies to solve this problem: "ok,
now how do I print to a file other than sys.stdout?". The main flaw in this
reasoning is that a newbie doesn't necessarily realize that when you print
something to the screen it actually goes through a _file_ object, so is
unlikely to ask that question. Or the other way round: someone asking that
question can hardly be considered a newbie. It takes quite a bit of
learning before someone can make the step from "a file is a thing on my
hard drive that stores data" to "a file is an abstract stream object". And
once you've made that step, you don't really need the extended print
statement that badly anymore.

>The present
>change disturbs experienced users (the >> syntax aside) and you get
>signals about that from them, because the current behavior does not
>comply with any existing concept as far as file redirection is concerned.
>However, since these guys are experienced and knowledgable, they already
>understand this game quite well. So what you get is just "Oh really? OK,
>this is messy" from the chatty ones and everybody moves on.  The others
>just don't care, but they not necessarily agree.

Amen.

Just





From guido at beopen.com  Wed Sep 13 14:57:03 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 13 Sep 2000 07:57:03 -0500
Subject: [Python-Dev] Re: [PyWX] RE: PyWX (Python AOLserver plugin)
In-Reply-To: Your message of "Wed, 13 Sep 2000 01:08:57 -0400."
             <14783.3049.364561.641240@freak.kaiserty.com> 
References: <EDFD2A95EE7DD31187350090279C6767E45B1C@THRESHER>  
            <14783.3049.364561.641240@freak.kaiserty.com> 
Message-ID: <200009131257.HAA04051@cj20424-a.reston1.va.home.com>

> Let's set os.environ to a normal dict (i.e., break the connection to
> the process's actual environment) initialized to the contents of the
> environment.  This fake environment can be passed to a child using
> execve.  We would have to override os.system() and its cousins to use
> execve with this fake environment.
> 
> We only need to figure out:
> 
> 1. Whether we can just assign a dict to os.environ (and
>    posix.environ?) to kill their special behaviors;

You only need to assign to os.environ; posix.environ is not magic.

> 2. Whether such changes can be made separately in each interpreter
>    without them affecting one another;

Yes -- each interpreter (if you use NewInterpreter or whatever) has
its own copy of the os module.

> 3. Whether special measures have to be taken to cause the fake
>    environment dictionary to be garbage collected when the interpreter
>    is destroyed.

No.

> Regarding PWD there's nothing we can realistically do except document
> this limitation and clobber os.chdir() as suggested by Guido.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From gvwilson at nevex.com  Wed Sep 13 14:58:58 2000
From: gvwilson at nevex.com (Greg Wilson)
Date: Wed, 13 Sep 2000 08:58:58 -0400 (EDT)
Subject: [Python-Dev] Academic Paper on Open Source
Message-ID: <Pine.LNX.4.10.10009130854520.2281-100000@akbar.nevex.com>

Yutaka Yamauchi has written an academic paper about Open Source
development methodology based in part on studying the GCC project:

http://www.lab7.kuis.kyoto-u.ac.jp/~yamauchi/papers/yamauchi_cscw2000.pdf

Readers of this list may find it interesting...

Greg
http://www.software-carpentry.com




From jack at oratrix.nl  Wed Sep 13 15:11:07 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Wed, 13 Sep 2000 15:11:07 +0200
Subject: [Python-Dev] Need some hands to debug MacPython installer 
In-Reply-To: Message by Charles G Waldman <cgw@fnal.gov> ,
	     Fri, 8 Sep 2000 18:41:12 -0500 (CDT) , <14777.31000.382351.905418@buffalo.fnal.gov> 
Message-ID: <20000913131108.2F151303181@snelboot.oratrix.nl>

Charles,
sorry, I didn't see your message until now. Could you give me some information 
on the configuration of the mac involved? Ideally the output of "Apple System 
Profiler", which will be in the Apple-menu if you have it. It appears, though, 
that you're running an old MacOS, in which case you may not have it. Then what 
I'd like to know is the machine type, OS version, amount of memory.

> I am not a Mac user but I saw your posting and my wife has a Mac so I
> decided to give it a try. 
> 
> When I ran the installer, a lot of the text referred to "Python 1.6"
> despite this being a 2.0 installer.
> 
> As the install completed I got a message:  
> 
>  The application "Configure Python" could not be opened because
>  "OTInetClientLib -- OTInetGetSecondaryAddresses" could not be found
> 
> After that, if I try to bring up PythonIDE or PythonInterpreter by
> clicking on the 16-ton icons, I get the same message about
> OTInetGetSecondaryAddresses.  So I'm not able to run Python at all
> right now on this Mac.
> 

--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 





From Vladimir.Marangozov at inrialpes.fr  Wed Sep 13 15:58:53 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Wed, 13 Sep 2000 15:58:53 +0200 (CEST)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <l03102802b5e4e70319fa@[193.78.237.174]> from "Just van Rossum" at Sep 13, 2000 09:33:15 AM
Message-ID: <200009131358.PAA01096@python.inrialpes.fr>

Just van Rossum wrote:
> 
> Amen.
> 

The good thing is that we discussed this while there was still time. Like
other minor existing Python features, this one is probably going to die in
a dark corner due to the following conclusions:

1. print >> None generates multiple interpretations. It doesn't really
   matter which one is right or wrong. There is confusion. Face it.

2. For many users, "print >>None makes the '>>None' part disappear"
   is perceived as too magic and inconsistent in the face of general
   public knowledge on redirecting output. Honor that opinion.

3. Any specialization of None is bad. None == sys.stdout is no better
   than None == NullFile. A bug in user code may cause passing None,
   which will dump the output to stdout while it's meant to go into
   a file (say, a web log). This would be hard to catch, and once this
   bites you, you'll start adding extra checks to make sure you're not
   passing None. (IOW, the same -1 on NullFile applies to sys.stdout)

A safe recommendation is to back this out and make it raise an exception.
No functionality of _extended_ print is lost.

whatever-the-outcome-is,-update-the-PEP'ly y'rs
-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From DavidA at ActiveState.com  Wed Sep 13 18:24:12 2000
From: DavidA at ActiveState.com (David Ascher)
Date: Wed, 13 Sep 2000 09:24:12 -0700 (Pacific Daylight Time)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <200009130120.NAA20286@s454.cosc.canterbury.ac.nz>
Message-ID: <Pine.WNT.4.21.0009130921340.1496-100000@loom>

On Wed, 13 Sep 2000, Greg Ewing wrote:

> Fredrik Lundh <effbot at telia.com>:
> 
> > "map(None, seq)" uses None to indicate that there are really
> > no function to map things through.
> 
> This one is just as controversial as print>>None. I would
> argue that it *doesn't* mean "no function", because that
> doesn't make sense -- there always has to be *some* function.
> It really means "use a default function which constructs
> a tuple from its arguments".

Agreed. To take another example which I also find 'warty', 

	string.split(foo, None, 3)

doesn't mean "use no separators" it means "use whitespace separators which
can't be defined in a single string".
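
The behavior in question, for reference (string.split(foo, None, 3) is the
method call foo.split(None, 3)):

```python
# None as the separator means "split on runs of any whitespace";
# the 3 caps the number of splits performed.
words = "a  b\tc d and the rest".split(None, 3)
```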

Thus, FWIW, I'm -1 on the >>None construct.  I'll have a hard time
teaching it, and I'll recommend against using it (unless and until
convinced otherwise, of course).

--david




From titus at caltech.edu  Wed Sep 13 19:09:42 2000
From: titus at caltech.edu (Titus Brown)
Date: Wed, 13 Sep 2000 10:09:42 -0700
Subject: [Python-Dev] Re: [PyWX] RE: PyWX (Python AOLserver plugin)
In-Reply-To: <EDFD2A95EE7DD31187350090279C6767E45B1C@THRESHER>; from brent.fulgham@xpsystems.com on Tue, Sep 12, 2000 at 02:55:10PM -0700
References: <EDFD2A95EE7DD31187350090279C6767E45B1C@THRESHER>
Message-ID: <20000913100942.G10010@cns.caltech.edu>

-> > There's no easy way to fix the current directory problem.  Just tell
-> > your CGI programmers that os.chdir() is off-limits; you may remove it
-> > from the os module (and from the posix module) during initialization
-> > of your interpreter to enforce this.
-> >
-> 
-> This is probably a good idea.

Finally, he says it ;).

-> > Are you *sure* you are using PyInterpreterState_New() and not just
-> > creating new threads?
-> >
-> Yes.

We're using Py_NewInterpreter().  I don't know how much Brent has said
(I'm not on the python-dev mailing list, something I intend to remedy)
but we have two basic types of environment: new interpreter and reused
interpreter.

Everything starts off as a new interpreter, created using Py_NewInterpreter().
At the end of a Web request, a decision is made about "cleaning up" the
interpreter for re-use, vs. destroying it.

Interpreters are cleaned for reuse roughly as follows (using really ugly
C pseudo-code with error checking removed):

---

PyThreadState_Clear(thread_state);
PyDict_Clear(main_module_dict);

// Add builtin module

bimod = PyImport_ImportModule("__builtin__");
PyDict_SetItemString(maindict, "__builtins__", bimod);

---

Some time ago, I decided not to use PyInterpreterState_New() because it
seemed unnecessary; Py_NewInterpreter() did everything we wanted and nothing
more.  Looking at the code for 1.5.2, Py_NewInterpreter():

1) creates a new interpreter state;
2) creates the first thread state for that interpreter;
3) imports the builtin and sys modules, and sets up sys.modules;
4) sets the path;
5) initializes main, as we do above in the reuse part;
6) (optionally) does site initialization.

Since I think we want to do all of that, I don't see any problems.  It seems
like the sys.argv stuff is a problem with PyWX, not with Python inherently.

cheers,
--titus



From skip at mojam.com  Wed Sep 13 19:48:10 2000
From: skip at mojam.com (Skip Montanaro)
Date: Wed, 13 Sep 2000 12:48:10 -0500 (CDT)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <Pine.WNT.4.21.0009130921340.1496-100000@loom>
References: <200009130120.NAA20286@s454.cosc.canterbury.ac.nz>
	<Pine.WNT.4.21.0009130921340.1496-100000@loom>
Message-ID: <14783.48602.639962.38233@beluga.mojam.com>

    David> Thus, FWIW, I'm -1 on the >>None construct.  I'll have a hard
    David> time teaching it, and I'll recommend against using it (unless and
    David> until convinced otherwise, of course).

I've only been following this thread with a few spare neurons.  Even so, I
really don't understand what all the fuss is about.  From the discussions
I've read on this subject, I'm confident the string "print >>None" will
never appear in an actual program.  Instead, it will be used the way Guido
envisioned:

    def write(arg, file=None):
	print >>file, arg

It will never be used in interactive sessions.  You'd just type "print arg"
or "print >>file, arg".  Programmers will never use the name "None" when
putting prints in their code.  They will write "print >>file" where file can
happen to take on the value None.  I doubt new users will even notice it, so
don't bother mentioning it when teaching about the print statement.
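
The envisioned pattern, sketched with the fallback written out (io.StringIO
stands in for a real file object):

```python
import io
import sys

def write(arg, file=None):
    # None falls through to sys.stdout -- the >>None behavior in
    # function-argument form.
    out = file if file is not None else sys.stdout
    out.write(str(arg) + "\n")

buf = io.StringIO()
write("logged", file=buf)   # goes to the buffer
write("on screen")          # goes to sys.stdout
```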

I'm sure David teaches people how to use classes without ever mentioning
that they can fiddle a class's __bases__ attribute.  That feature seems much
more subtle and a whole lot more dangerous than "print >> None", yet I hear
no complaints about it.

The __bases__ example occurred to me because I had occasion to use it for
the first time a few days ago.  I don't even know how long the language has
supported it (obviously at least since 1.5.2).  Worked like a charm.
Without it, I would have been stuck making a bunch of subclasses of
cgi.FormContentDict, all because I wanted each of the subclasses I used to
have a __delitem__ method.  What was an "Aha!" followed by about thirty
seconds of typing would have been a whole mess of fiddling without
modifiable __bases__ attributes.  Would I expect the readers of this list to
understand what I did?  In a flash.  Would I mention it to brand new Python
programmers?  Highly unlikely.
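
Skip's trick, reconstructed as a sketch (the class names here are invented
to mirror the cgi.FormContentDict situation, not taken from his code):

```python
class FormContentDict:
    def __init__(self):
        self.dict = {"name": "value"}
    def __getitem__(self, key):
        return self.dict[key]

class DeletableFormDict(FormContentDict):
    def __delitem__(self, key):
        del self.dict[key]

class SvFormContentDict(FormContentDict):
    pass

# Retarget the existing subclass at the enriched base: every
# SvFormContentDict instance now picks up __delitem__.
SvFormContentDict.__bases__ = (DeletableFormDict,)

form = SvFormContentDict()
del form["name"]
```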

It's great to make sure Python is approachable for new users.  I believe we
need to also continue to improve Python's power for more advanced users.
That doesn't mean turning it into Perl, but it does occasionally mean adding
features to the language that new users won't need in their first class
assignment.

+1 from me.  If Guido likes it, that's cool.

Skip




From gward at python.net  Thu Sep 14 04:53:51 2000
From: gward at python.net (Greg Ward)
Date: Wed, 13 Sep 2000 22:53:51 -0400
Subject: [Python-Dev] Re: packaging Tkinter separately from core Python
In-Reply-To: <200009131247.HAA03938@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Wed, Sep 13, 2000 at 07:47:46AM -0500
References: <14782.59951.901752.674039@bitdiddle.concentric.net> <200009131247.HAA03938@cj20424-a.reston1.va.home.com>
Message-ID: <20000913225351.A862@beelzebub>

On 13 September 2000, Guido van Rossum said:
> Hm.  Would it be easier to have Tkinter.py and friends be part of the
> core distribution, and place only _tkinter and Tcl/Tk in the Tkinter
> RPM?

That seems unnecessarily complex.

> If that's not good, I would recommend installing as a subdir of
> site-packages, with a .pth file pointing to that subdir, e.g.:

And that seems nice.  ;-)

Much easier to get the Distutils to install a .pth file than to do evil
trickery to make it install into, e.g., the standard library: just use
the 'extra_path' option.  E.g., in the NumPy setup script
(distutils/examples/numpy_setup.py):

    extra_path = 'Numeric'

means put everything into a directory "Numeric" and create
"Numeric.pth".  If you want different names, you have to make
'extra_path' a tuple:

    extra_path = ('tkinter', 'tkinter-lib')

should get your example setup:

>   site-packages/
>               tkinter.pth		".../site-packages/tkinter-lib"
> 		tkinter-lib/
> 			    _tkinter.so
> 			    Tkinter.py
> 			    Tkconstants.py
> 			    ...etc...

But it's been a while since this stuff was tested.

BTW, is there any good reason to call that directory "tkinter-lib"
instead of "tkinter"?  Is that the preferred convention for directories-
full-of-modules that are not packages?

        Greg
-- 
Greg Ward                                      gward at python.net
http://starship.python.net/~gward/



From martin at loewis.home.cs.tu-berlin.de  Thu Sep 14 08:53:56 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Thu, 14 Sep 2000 08:53:56 +0200
Subject: [Python-Dev] Integer Overflow
Message-ID: <200009140653.IAA01702@loewis.home.cs.tu-berlin.de>

With the current CVS, I get surprising results

Python 2.0b1 (#47, Sep 14 2000, 08:51:18) 
[GCC 2.95.2 19991024 (release)] on linux2
Type "copyright", "credits" or "license" for more information.
>>> 1*1
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
OverflowError: integer multiplication

What is causing this exception?

Curious,
Martin



From tim_one at email.msn.com  Thu Sep 14 09:04:27 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 14 Sep 2000 03:04:27 -0400
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <200009121411.QAA30848@python.inrialpes.fr>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com>

[Tim]
> sometimes-you-just-gotta-trust-your-bdfl-ly y'rs  - tim

[Vladimir Marangozov]
> ...
> I would have preferred arguments. The PEP and your responses lack them
> which is another sign about this feature.

I'll suggest as an alternative that we have an enormous amount of work to
complete for the 2.0 release, and continuing to argue about this isn't
perceived as a reasonable use of limited time.

I've tried it; I like it; anything I say beyond that would just be jerkoff
rationalizing of the conclusion I'm *condemned* to support by my own
pleasant experience with it.  Same with Guido.

We went over it again at a PythonLabs mtg today, and compared to the other
20 things on our agenda, when it popped up we all agreed "eh" after about a
minute.  It has supporters and detractors, the arguments are getting ever
more elaborate, extreme, and repetitive with each iteration, and positions
are clearly frozen already.  That's what a BDFL is for.  He's seen all the
arguments; they haven't changed his mind; and, sorry, but it's a tempest in
a teapot regardless.

how-about-everyone-pitch-in-to-help-clear-the-bug-backlog-instead?-ly
    y'rs  - tim





From tim_one at email.msn.com  Thu Sep 14 09:14:14 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 14 Sep 2000 03:14:14 -0400
Subject: [Python-Dev] Integer Overflow
In-Reply-To: <200009140653.IAA01702@loewis.home.cs.tu-berlin.de>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEPFHFAA.tim_one@email.msn.com>

Works for me (Windows).  Local corruption?  Compiler optimization error?
Config screwup?  Clobber everything and rebuild.  If still a problem, turn
off optimization and try again.  If still a problem, write up what you know
and enter SourceForge bug, marking it platform-specific.

> -----Original Message-----
> From: python-dev-admin at python.org [mailto:python-dev-admin at python.org]On
> Behalf Of Martin v. Loewis
> Sent: Thursday, September 14, 2000 2:54 AM
> To: python-dev at python.org
> Subject: [Python-Dev] Integer Overflow
>
>
> With the current CVS, I get surprising results
>
> Python 2.0b1 (#47, Sep 14 2000, 08:51:18)
> [GCC 2.95.2 19991024 (release)] on linux2
> Type "copyright", "credits" or "license" for more information.
> >>> 1*1
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> OverflowError: integer multiplication
>
> What is causing this exception?
>
> Curious,
> Martin
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://www.python.org/mailman/listinfo/python-dev





From martin at loewis.home.cs.tu-berlin.de  Thu Sep 14 09:32:26 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Thu, 14 Sep 2000 09:32:26 +0200
Subject: [Python-Dev] Integer Overflow
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEPFHFAA.tim_one@email.msn.com>
References: <LNBBLJKPBEHFEDALKOLCGEPFHFAA.tim_one@email.msn.com>
Message-ID: <200009140732.JAA02739@loewis.home.cs.tu-berlin.de>

> Works for me (Windows).  Local corruption?  Compiler optimization error?
> Config screwup?

Config screwup. I simultaneously try glibc betas, and 2.1.93 manages
to define LONG_BIT as 64 (due to testing whether INT_MAX is 2147483647
at a time when INT_MAX is not yet defined). Shifting by LONG_BIT/2 is
then a no-op, so ah=a, bh=b in int_mul. gcc did warn about this, but I
ignored/forgot about the warning.
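
What the miscompiled check ends up doing can be mimicked in Python (a
sketch; x86 takes the shift count mod the operand width for 32-bit values,
so a shift by 32 behaves like a shift by 0 rather than producing 0):

```python
WORD = 32

def hw_shift_right(x, n):
    # Emulate the hardware: the count is taken mod the word size.
    return (x & 0xFFFFFFFF) >> (n % WORD)

# With LONG_BIT wrongly 64, int_mul shifts by LONG_BIT/2 == 32:
ah = hw_shift_right(1, 32)   # ah == a, not 0
bh = hw_shift_right(1, 32)   # bh == b, not 0
# Both "high halves" look nonzero, so the overflow check fires for 1*1.
```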

I reported that to the glibc people, and worked around it locally.

Sorry for the confusion,

Martin



From tim_one at email.msn.com  Thu Sep 14 09:44:37 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 14 Sep 2000 03:44:37 -0400
Subject: [Python-Dev] Integer Overflow
In-Reply-To: <200009140732.JAA02739@loewis.home.cs.tu-berlin.de>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEPHHFAA.tim_one@email.msn.com>

Glad you found it!  Note that the result of shifting a 32-bit integer *by*
32 isn't defined in C (gotta love it ...), so "no-op" was lucky.

> -----Original Message-----
> From: Martin v. Loewis [mailto:martin at loewis.home.cs.tu-berlin.de]
> Sent: Thursday, September 14, 2000 3:32 AM
> To: tim_one at email.msn.com
> Cc: python-dev at python.org
> Subject: Re: [Python-Dev] Integer Overflow
>
>
> > Works for me (Windows).  Local corruption?  Compiler optimization error?
> > Config screwup?
>
> Config screwup. I simultaneously try glibc betas, and 2.1.93 manages
> to define LONG_BIT as 64 (due to testing whether INT_MAX is 2147483647
> at a time when INT_MAX is not yet defined). Shifting by LONG_BIT/2 is
> then a no-op, so ah=a, bh=b in int_mul. gcc did warn about this, but I
> ignored/forgot about the warning.
>
> I reported that to the glibc people, and worked-around it locally.
>
> Sorry for the confusion,
>
> Martin





From Vladimir.Marangozov at inrialpes.fr  Thu Sep 14 11:40:37 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Thu, 14 Sep 2000 11:40:37 +0200 (CEST)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com> from "Tim Peters" at Sep 14, 2000 03:04:27 AM
Message-ID: <200009140940.LAA02556@python.inrialpes.fr>

Tim Peters wrote:
> 
> I'll suggest as an alternative that we have an enormous amount of work to
> complete for the 2.0 release, and continuing to argue about this isn't
> perceived as a reasonable use of limited time.

Fair enough, but I had no choice: this feature was imposed without prior
discussion and I saw it too late to take a stance. I've done my job.

> 
> I've tried it; I like it; anything I say beyond that would just be jerkoff
> rationalizing of the conclusion I'm *condemned* to support by my own
> pleasant experience with it.  Same with Guido.

Nobody is condemned when receptive. You're inflexibly persistent here.

Remove the feature, discuss it, try providing arguments so that we can
agree (or disagree), write the PEP including a summary of the discussion,
then decide and add the feature.

In this particular case, I find Guido's attitude regarding the "rules of
the game" (that you have fixed, btw, PEPs included) quite unpleasant.

I speak for myself. Guido has invited me here so that I could share
my opinions and experience easily and that's what I'm doing in my spare
cycles (no, your agenda is not mine so I won't look at the bug list).
If you think I'm doing more harm than good, no problem. I'd be happy
to decline his invitation and quit.

I'll be even more explicit:

There are organizational bugs in the functioning of this micro-society
that would need to be fixed first, IMHO. Other signs about this have
been expressed in the past too. Nobody commented. Silence can't rule
forever. Note that I'm not writing arguments for my own pleasure or to
scratch my nose. My time is precious enough, just like yours.

> 
> We went over it again at a PythonLabs mtg today, and compared to the other
> 20 things on our agenda, when it popped up we all agreed "eh" after about a
> minute.  It has supporters and detractors, the arguments are getting all of
> more elaborate, extreme and repetitive with each iteration, and positions
> are clearly frozen already.  That's what a BDFL is for.  He's seen all the
> arguments; they haven't changed his mind; and, sorry, but it's a tempest in
> a teapot regardless.

Nevermind.

Open your eyes, though.

pre-release-pressure-can-do-more-harm-than-it-should'ly
-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From gward at mems-exchange.org  Thu Sep 14 15:03:28 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Thu, 14 Sep 2000 09:03:28 -0400
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIEHLHFAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Mon, Sep 11, 2000 at 09:27:10PM -0400
References: <200009112322.BAA29633@python.inrialpes.fr> <LNBBLJKPBEHFEDALKOLCIEHLHFAA.tim_one@email.msn.com>
Message-ID: <20000914090328.A31011@ludwig.cnri.reston.va.us>

On 11 September 2000, Tim Peters said:
> > So as long as one uses extended print, she's already an advanced user.
> 
> Nope!  "Now how did I get this to print to a file instead?" is one of the
> faqiest of newbie FAQs on c.l.py, and the answers they've been given in the
> past were sheer torture for them ("sys?  what's that?  rebind sys.stdout to
> a file-like object?  what?! etc").

But that's only an argument for "print >>file"; it doesn't support
"print >>None" == "print >>sys.stdout" == "print" at all.

The only possible rationale I can see for that equivalence is in a
function that wraps print; it lets you get away with this:

    def my_print (string, file=None):
        print >> file, string

instead of this:

    def my_print (string, file=None):
        if file is None: file = sys.stdout
        print >> file, string

...which is *not* sufficient justification for the tortured syntax *and*
bizarre semantics.  I can live with the tortured ">>" syntax, but
coupled with the bizarre "None == sys.stdout" semantics, this is too
much.

Hmmm.  Reviewing my post, I think someone needs to decide what the
coding standard for ">>" is: "print >>file" or "print >> file"?  ;-)

        Greg



From gward at mems-exchange.org  Thu Sep 14 15:13:27 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Thu, 14 Sep 2000 09:13:27 -0400
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <20000914090328.A31011@ludwig.cnri.reston.va.us>; from gward@ludwig.cnri.reston.va.us on Thu, Sep 14, 2000 at 09:03:28AM -0400
References: <200009112322.BAA29633@python.inrialpes.fr> <LNBBLJKPBEHFEDALKOLCIEHLHFAA.tim_one@email.msn.com> <20000914090328.A31011@ludwig.cnri.reston.va.us>
Message-ID: <20000914091326.B31011@ludwig.cnri.reston.va.us>

Oops.  Forgot to cast my votes:

+1 on redirectable print
-0 on the particular syntax chosen (not that it matters now)
-1 on None == sys.stdout (yes, I know it's more subtle than that,
      but that's just what it looks like)

IMHO "print >>None" should have the same effect as "print >>37" or
"print >>'foo'":

  ValueError: attempt to print to a non-file object

(as opposed to "print to file descriptor 37" and "open a file called
'foo' in append mode and write to it", of course.  ;-)

        Greg



From peter at schneider-kamp.de  Thu Sep 14 15:07:19 2000
From: peter at schneider-kamp.de (Peter Schneider-Kamp)
Date: Thu, 14 Sep 2000 15:07:19 +0200
Subject: [Python-Dev] Re: timeouts  (Was: checking an ip)
References: <SOLv5.8548$l6.467825@zwoll1.home.nl> <39BF9585.FC4C9CB1@schneider-kamp.de> <8po6ei$893$1@sunnews.cern.ch> <013601c01e1f$2f8dde60$978647c1@DEVELOPMENT>
Message-ID: <39C0CD87.396302EC@schneider-kamp.de>

I have proposed the inclusion of Timothy O'Malley's timeoutsocket.py
into the standard socket module on python-dev, but there has not been
a single reply in four weeks.

http://www.python.org/pipermail/python-dev/2000-August/015111.html

I think there are four possibilities:
1) add a timeoutsocket class to Lib/timeoutsocket.py
2) add a timeoutsocket class to Lib/socket.py
3) replace the socket class in Lib/socket.py
4) wait until the interval is down to one day
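As a rough sketch of what option 2 might look like (the class name and the
select-based approach here are illustrative assumptions, not Timothy
O'Malley's actual code):

```python
import select
import socket

class TimeoutSocket:
    """Wrap a socket so recv() gives up after `timeout` seconds."""

    def __init__(self, sock, timeout=30.0):
        self._sock = sock
        self._timeout = timeout

    def recv(self, bufsize):
        # wait until the socket is readable, or the timeout expires
        ready, _, _ = select.select([self._sock], [], [], self._timeout)
        if not ready:
            raise socket.error("recv timed out after %s seconds"
                               % self._timeout)
        return self._sock.recv(bufsize)

    def __getattr__(self, name):
        # delegate everything else (send, close, ...) to the real socket
        return getattr(self._sock, name)
```

The same select dance would have to be repeated for connect(), accept()
and send() to make this a drop-in replacement, which is part of why the
pure-Python route needs care.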

feedback-hungri-ly y'rs
Peter

Ulf Engström wrote:
> 
> I'm thinking this is something that should be put in the distro, since it
> seems a lot of people are asking for it all the time. I'm using select, but
> it'd be even better to have a proper timeout on all the socket stuff. Not to
> mention timeouts on input and raw_input (using select on those is
> platform-dependent). Does anyone have a solution to that?
> Are there any plans to put in timeouts? Can there be? :)
> Regards
> Ulf
> 
> > sigh...
> > and to be more precise, look at yesterday's post labelled
> > nntplib timeout bug?
> > interval between posts asking about timeout for sockets is already
> > down to 2 days.. great :-)
> 
> --
> http://www.python.org/mailman/listinfo/python-list



From garabik at atlas13.dnp.fmph.uniba.sk  Thu Sep 14 16:58:35 2000
From: garabik at atlas13.dnp.fmph.uniba.sk (Radovan Garabik)
Date: Thu, 14 Sep 2000 18:58:35 +0400
Subject: [Python-Dev] Re: [Fwd: Re: timeouts  (Was: checking an ip)]
In-Reply-To: <39C0D268.61F35DE8@schneider-kamp.de>; from peter@schneider-kamp.de on Thu, Sep 14, 2000 at 03:28:08PM +0200
References: <39C0D268.61F35DE8@schneider-kamp.de>
Message-ID: <20000914185835.A4080@melkor.dnp.fmph.uniba.sk>

On Thu, Sep 14, 2000 at 03:28:08PM +0200, Peter Schneider-Kamp wrote:
> 
> I have proposed the inclusion of Timothy O'Malley's timeoutsocket.py
> into the standard socket module on python-dev, but there has not been
> a single reply in four weeks.
> 
> http://www.python.org/pipermail/python-dev/2000-August/015111.html
> 
> I think there are four possibilities:
> 1) add a timeoutsocket class to Lib/timeoutsocket.py

Why not -- it won't break anything, but timeoutsocket.py would need a
bit of "polishing" and some testing in this case. I had some strange
errors on WinNT with timeout_socket (everything worked flawlessly on
Linux), but unfortunately I am now away from that (or any other WinNT)
computer and cannot do any tests.

> 2) add a timeoutsocket class to Lib/socket.py

possible

> 3) replace the socket class in Lib/socket.py

This could break some applications, especially those that play with
changing the blocking/non-blocking status of their sockets.

> 4) wait until the interval is down to one day

5) add timeouts at the C level to socketmodule

This would probably be the right solution, but it is rather
difficult to write.


And, of course, both timeout_socket and timeoutsocket should be
looked at rather closely. (I dismantled timeout_socket when I was
hunting bugs in it, but have not done the same with timeoutsocket.)


-- 
 -----------------------------------------------------------
| Radovan Garabik http://melkor.dnp.fmph.uniba.sk/~garabik/ |
| __..--^^^--..__    garabik @ melkor.dnp.fmph.uniba.sk     |
 -----------------------------------------------------------
Antivirus alert: file .signature infected by signature virus.
Hi! I'm a signature virus! Copy me into your signature file to help me spread!



From skip at mojam.com  Thu Sep 14 17:17:03 2000
From: skip at mojam.com (Skip Montanaro)
Date: Thu, 14 Sep 2000 10:17:03 -0500 (CDT)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com>
References: <200009121411.QAA30848@python.inrialpes.fr>
	<LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com>
Message-ID: <14784.60399.893481.717232@beluga.mojam.com>

    Tim> how-about-everyone-pitch-in-to-help-clear-the-bug-backlog-instead?-ly

I find the way python-bugs is working these days extremely bizarre.  Is it
resending a bug when there's some sort of change?  A few I've examined were
originally submitted in 1999.  Are they just now filtering out of jitterbug
or have they had some comment added that I don't see?

Skip




From paul at prescod.net  Thu Sep 14 17:28:14 2000
From: paul at prescod.net (Paul Prescod)
Date: Thu, 14 Sep 2000 08:28:14 -0700
Subject: [Python-Dev] Challenge about print >> None
References: <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com>
Message-ID: <39C0EE8E.770CAA17@prescod.net>

Tim Peters wrote:
> 
>...
> 
> We went over it again at a PythonLabs mtg today, and compared to the other
> 20 things on our agenda, when it popped up we all agreed "eh" after about a
> minute.  It has supporters and detractors, the arguments are getting all of
> more elaborate, extreme and repetitive with each iteration, and positions
> are clearly frozen already.  That's what a BDFL is for.  He's seen all the
> arguments; they haven't changed his mind; and, sorry, but it's a tempest in
> a teapot regardless.

All of the little hacks and special cases add up.

In the face of all of this confusion the safest thing would be to make
print >> None illegal and then figure it out for Python 2.1. 

-- 
 Paul Prescod - Not encumbered by corporate consensus
Simplicity does not precede complexity, but follows it. 
	- http://www.cs.yale.edu/homes/perlis-alan/quotes.html



From jeremy at beopen.com  Thu Sep 14 17:38:56 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 14 Sep 2000 11:38:56 -0400 (EDT)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <14784.60399.893481.717232@beluga.mojam.com>
References: <200009121411.QAA30848@python.inrialpes.fr>
	<LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com>
	<14784.60399.893481.717232@beluga.mojam.com>
Message-ID: <14784.61712.512770.129447@bitdiddle.concentric.net>

>>>>> "SM" == Skip Montanaro <skip at mojam.com> writes:

  Tim> how-about-everyone-pitch-in-to-help-clear-the-bug-backlog-instead?-ly

  SM> I find the way python-bugs is working these days extremely
  SM> bizarre.  Is it resending a bug when there's some sort of
  SM> change?  A few I've examined were originally submitted in 1999.
  SM> Are they just now filtering out of jitterbug or have they had
  SM> some comment added that I don't see?

Yes.  SF resends the entire bug report for every change to the bug.
If you change the priority from 5 to 4 or do anything else, it sends
mail.  It seems like too much mail to me, but better than no mail at
all.

Also note that the bugs list gets a copy of everything.  The submitter
and current assignee for each bug also get an email.

Jeremy



From jeremy at beopen.com  Thu Sep 14 17:48:50 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 14 Sep 2000 11:48:50 -0400 (EDT)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <200009140940.LAA02556@python.inrialpes.fr>
References: <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com>
	<200009140940.LAA02556@python.inrialpes.fr>
Message-ID: <14784.62306.209688.587211@bitdiddle.concentric.net>

>>>>> "VM" == Vladimir Marangozov <Vladimir.Marangozov at inrialpes.fr> writes:

  VM> Remove the feature, discuss it, try providing arguments so that
  VM> we can agree (or disagree), write the PEP including a summary of
  VM> the discussion, then decide and add the feature.

The last step in the PEP process is for Guido to accept or reject a
PEP.  Since he is one of the primary advocates of the print >>None
behavior, I don't see why we should do what you suggest.  Presumably
Guido will continue to want the feature.

  VM> In this particular case, I find Guido's attitude regarding the
  VM> "rules of the game" (that you have fixed, btw, PEPs included)
  VM> quite unpleasant.

What is Guido's attitude?  What are the "rules of the game"?

  VM> I speak for myself. Guido has invited me here so that I could
  VM> share my opinions and experience easily and that's what I'm
  VM> doing in my spare cycles (no, your agenda is not mine so I won't
  VM> look at the bug list).  If you think I'm doing more harm than
  VM> good, no problem. I'd be happy to decline his invitation and
  VM> quit.

You're a valued member of this community.  We welcome your opinions
and experience.  It appears that in this case, Guido's opinions and
experience lead to a different conclusion than yours.  I am not
thrilled with the print >> None behavior myself, but I do not see the
value of pursuing the issue at length.

  VM> I'll be even more explicit:

  VM> There are organizational bugs in the functioning of this
  VM> micro-society that would need to be fixed first, IMHO. Other
  VM> signs about this have been expressed in the past too. Nobody
  VM> commented. Silence can't rule forever. Note that I'm not writing
  VM> arguments for my own pleasure or to scratch my nose. My time is
  VM> precious enough, just like yours.

If I did not comment on early signs of organizational bugs, it was
probably because I did not see them.  We did a lot of hand-wringing
several months ago about the severe backlog in reviewing patches and
bugs.  We're making good progress on both the backlogs.  We also
formalized the design process for major language features.  Our
execution of that process hasn't been flawless, witness the features
in 2.0b1 that are still waiting for their PEPs to be written, but the
PEP process was instituted late in the 2.0 release process.

Jeremy



From effbot at telia.com  Thu Sep 14 18:05:05 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Thu, 14 Sep 2000 18:05:05 +0200
Subject: [Python-Dev] Challenge about print >> None
References: <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com> <39C0EE8E.770CAA17@prescod.net>
Message-ID: <00c201c01e65$8d327bc0$766940d5@hagrid>

Paul wrote:
> In the face of all of this confusion the safest thing would be to make
> print >> None illegal and then figure it out for Python 2.1.

Really?  So what's the next feature we'll have to take out after
some other python-dev member threatens to leave if he cannot
successfully force his ideas onto Guido and everyone else?

</F>

    "I'm really not a very nice person. I can say 'I don't care' with
    a straight face, and really mean it."
    -- Linus Torvalds, on why the B in BDFL really means "bastard"




From paul at prescod.net  Thu Sep 14 18:16:12 2000
From: paul at prescod.net (Paul Prescod)
Date: Thu, 14 Sep 2000 09:16:12 -0700
Subject: [Python-Dev] Challenge about print >> None
References: <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com> <39C0EE8E.770CAA17@prescod.net> <00c201c01e65$8d327bc0$766940d5@hagrid>
Message-ID: <39C0F9CC.C9ECC35E@prescod.net>

Fredrik Lundh wrote:
> 
> Paul wrote:
> > In the face of all of this confusion the safest thing would be to make
> > print >> None illegal and then figure it out for Python 2.1.
> 
> Really?  So what's the next feature we'll have to take out after
> some other python-dev member threatens to leave if he cannot
> successfully force his ideas onto Guido and everyone else?

There have been several participants, all long-time Python users, who
have said that this None thing is weird. Greg Ward, who even likes
*Perl* said it is weird.

By my estimation there are more voices against than for, and those that
are for are typically lukewarm ("I hated it at first but don't hate it
as much anymore"). Therefore I don't see any point in acting as if this
is a single man's crusade.

-- 
 Paul Prescod - Not encumbered by corporate consensus
Simplicity does not precede complexity, but follows it. 
	- http://www.cs.yale.edu/homes/perlis-alan/quotes.html



From akuchlin at mems-exchange.org  Thu Sep 14 18:32:57 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Thu, 14 Sep 2000 12:32:57 -0400
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <39C0F9CC.C9ECC35E@prescod.net>; from paul@prescod.net on Thu, Sep 14, 2000 at 09:16:12AM -0700
References: <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com> <39C0EE8E.770CAA17@prescod.net> <00c201c01e65$8d327bc0$766940d5@hagrid> <39C0F9CC.C9ECC35E@prescod.net>
Message-ID: <20000914123257.C31741@kronos.cnri.reston.va.us>

On Thu, Sep 14, 2000 at 09:16:12AM -0700, Paul Prescod wrote:
>By my estimation there are more voices against than for, and those that
>are for are typically lukewarm ("I hated it at first but don't hate it
>as much anymore"). Therefore I don't see any point in acting as if this
>is a single man's crusade.

Indeed.  On the other hand, this issue is minor enough that it's not
worth walking away from the community over; walk away if you no longer
use Python, or if it's not fun any more, or if the tenor of the
community changes.  Not because of one particular bad feature; GvR's
added bad features before, but we've survived.  

(I should be thankful, really, since the >>None feature means more
material for my Python warts page.)

--amk




From effbot at telia.com  Thu Sep 14 19:07:58 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Thu, 14 Sep 2000 19:07:58 +0200
Subject: [Python-Dev] Challenge about print >> None
References: <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com> <39C0EE8E.770CAA17@prescod.net> <00c201c01e65$8d327bc0$766940d5@hagrid> <39C0F9CC.C9ECC35E@prescod.net>
Message-ID: <003a01c01e6e$56aa2180$766940d5@hagrid>

paul wrote:
> Therefore I don't see any point in acting as if this is a single man's crusade.

really?  who else thinks that this little feature "shows that the rules
are fixed" and "my time is too precious to work on bug fixes" and "we're
here to vote, not to work" and "since my veto doesn't count, there are
organizational bugs". 

can we have a new mailing list, please?  one that's only dealing with
cool code, bug fixes, release administrivia, etc.  practical stuff, not
ego problems.

</F>




From loewis at informatik.hu-berlin.de  Thu Sep 14 19:28:54 2000
From: loewis at informatik.hu-berlin.de (Martin von Loewis)
Date: Thu, 14 Sep 2000 19:28:54 +0200 (MET DST)
Subject: [Python-Dev] Re: [Python-Help] Bug in PyTuple_Resize
In-Reply-To: <200009141413.KAA21765@enkidu.stsci.edu> (delapena@stsci.edu)
References: <200009141413.KAA21765@enkidu.stsci.edu>
Message-ID: <200009141728.TAA04901@pandora.informatik.hu-berlin.de>

> Thank you for the response.  Unfortunately, I do not have the know-how at
> this time to solve this problem!  I did submit my original query and
> your response to the sourceforge bug tracking mechanism this morning.

I spent some time with this bug, and found that it is in some
unrelated code: the tuple resizing mechanism is buggy if cyclic gc
is enabled. A patch is included below. [and in SF patch 101509]

It just happens that this code is rarely used: in _tkinter, when
filtering tuples, and when converting sequences to tuples. And even
then, the bug triggers on most systems only for _tkinter: the tuple
gets smaller in filter, so realloc(3C) returns the same address;
tuple() normally succeeds in knowing the size in advance, so no resize
is necessary.

Regards,
Martin

Index: tupleobject.c
===================================================================
RCS file: /cvsroot/python/python/dist/src/Objects/tupleobject.c,v
retrieving revision 2.44
diff -u -r2.44 tupleobject.c
--- tupleobject.c	2000/09/01 23:29:27	2.44
+++ tupleobject.c	2000/09/14 17:12:07
@@ -510,7 +510,7 @@
 		if (g == NULL) {
 			sv = NULL;
 		} else {
-			sv = (PyTupleObject *)PyObject_FROM_GC(g);
+			sv = (PyTupleObject *)PyObject_FROM_GC(sv);
 		}
 #else
 		sv = (PyTupleObject *)



From Vladimir.Marangozov at inrialpes.fr  Thu Sep 14 23:34:24 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Thu, 14 Sep 2000 16:34:24 -0500
Subject: [Python-Dev] See you later, folks!
Message-ID: <200009142134.QAA07143@cj20424-a.reston1.va.home.com>

[Vladimir asked me to post this to the python-dev mailing list, and
to subsequently turn off his subscriptions.  Come back soon, Vladimir!
--Guido]

The time has come for me to leave you for some time. But rest assured,
not for the reasons you suspect <wink>. I'm in the process of changing
jobs & country. Big changes, that is.

So indeed, I'll unsubscribe from the python-dev list for a while and
indeed, I won't look at the bug list because I won't be able to, not
because I don't want to. (I won't be able to handle more patches for
that matter, sorry!)

Regarding the latest debate about extended print, things are surely
not so extreme as they sounded to Fredrik! So take it easy. I still
can sign with both hands what I've said, though, although you must
know that whenever I engage in the second round of a debate, I have
reasons to do so and my writing style becomes more impassioned, indeed.
But remember that python-dev is a place where educated opinions are being
confronted. The "bug" I referred to is that Guido, as the principal
proponent of a feature has not entered the second round of this debate
to defend it, despite the challenge I have formulated and subsequently
argued (I understand that he might have felt strange after reading my
posts). I apologize for my style if you feel that I should. I would
quit python-dev in the sense that if there are no more debates, I have
little to no interest in participating. That's what happens when,
for instance, Guido exercises his power prematurely which is not a
good thing, overall.

In short, I suddenly felt like I had to clarify this situation, secretly
knowing that Guido & Tim and everybody else (except Fredrik, but I
forgive him <wink>) understands the many points I've raised. This
debate would be my latest "contribution" for some time.

Last but not least, I must say that I deeply respect Guido & Tim and
everybody else (including Fredrik <wink>) for their knowledge and
positive attitude!  (Tim, I respect your fat ass too <wink> -- he does
a wonderful job on c.l.py!)

See you later!

knowledge-cannot-shrink!-it-can-only-be-extended-and-so-should-be-print'ly
truly-None-forbidding'ly y'rs
--
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From guido at beopen.com  Fri Sep 15 00:15:49 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 14 Sep 2000 17:15:49 -0500
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: Your message of "Thu, 14 Sep 2000 08:28:14 MST."
             <39C0EE8E.770CAA17@prescod.net> 
References: <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com>  
            <39C0EE8E.770CAA17@prescod.net> 
Message-ID: <200009142215.RAA07332@cj20424-a.reston1.va.home.com>

> All of the little hacks and special cases add up.
> 
> In the face of all of this confusion the safest thing would be to make
> print >> None illegal and then figure it out for Python 2.1. 

Sorry, no deal.  print>>file and print>>None are here to stay.

Paul, I don't see why you keep whining about this.  Sure, it's the
feature that everybody loves to hate.  But what's the big deal?  Get
over it.  I don't believe for a second that there is a trend that I've
stopped listening.  On the contrary, I've spent a great deal of time
reading the arguments against this feature and its refinement, and
I simply fail to be convinced by the counter-arguments.

If this had been in the language from day one nobody would have
challenged it.  (And I've used my time machine to prove it, so don't
argue. :-)

If you believe I should no longer be the BDFL, say so, but please keep
it out of python-dev.  We're trying to get work done here.  You're an
employee of a valued member of the Python Consortium.  As such you can
request (through your boss) to be subscribed to the Consortium mailing
list.  Feel free to bring this up there -- there's not much else going
on there.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From jeremy at beopen.com  Thu Sep 14 23:28:33 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 14 Sep 2000 17:28:33 -0400 (EDT)
Subject: [Python-Dev] Revised release schedule
Message-ID: <14785.17153.995000.379187@bitdiddle.concentric.net>

I just updated PEP 200 with some new details about the release
schedule.  These details are still open to some debate, but they need
to be resolved quickly.

I propose that we release 2.0 beta 2 on 26 Sep 2000.  That's one week
from this coming Tuesday.  This would be the final beta.  The final
release would be two weeks after that on 10 Oct 2000.

The feature freeze we imposed before the first beta is still in effect
(more or less).  We should only be adding new features when they fix
crucial bugs.  In order to allow time to prepare the release, all
changes should be made by the end of the day on Sunday, 24 Sep.

There is still a lot of work that remains to resolve open patches and
fix as many bugs as possible.  I have re-opened a number of patches
that were postponed prior to the 2.0b1 release.  It is not clear that
all of these patches should be accepted, but some of them may be
appropriate for inclusion now.  

There is also a large backlog of old bugs and a number of new bugs
from 2.0b1.  Obviously, we need to get these new bugs resolved and
make a dent in the old bugs.  I'll send a note later today with some
guidelines for bug triage.

Jeremy



From guido at beopen.com  Fri Sep 15 00:25:37 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 14 Sep 2000 17:25:37 -0500
Subject: [Python-Dev] Is the 2.0 xml package too immature to release?
In-Reply-To: Your message of "Thu, 14 Sep 2000 08:28:14 MST."
             <39C0EE8E.770CAA17@prescod.net> 
References: <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com>  
            <39C0EE8E.770CAA17@prescod.net> 
Message-ID: <200009142225.RAA07360@cj20424-a.reston1.va.home.com>

> In the face of all of this confusion the safest thing would be to make
> [...] illegal and then figure it out for Python 2.1. 

Taking out controversial features is a good idea in some cases, in
order to prevent likely disasters.

I've heard that the xml support in 2.0b1 is broken, and that it's not
clear that it will be possible to fix it in time (the 2.0b2 release is
due in two weeks).  The best thing here seems to be to remove it and put
it back in 2.1 (due 3-6 months after 2.0).  In the meantime, the XML-sig
can release its own version.

The way I understand the situation right now is that there are two
packages claiming the name xml; one in the 2.0 core and one released
by the XML-sig.  While the original intent was for the XML-sig package
to be a superset of the core package, this doesn't appear to be
currently the case, even if the brokenness of the core xml package can
be fixed.

We absolutely cannot have a situation where there could be two
applications, one working only with the xml-sig's xml package, and the
other only with the 2.0 core xml package.  If at least one direction
of compatibility cannot be guaranteed, I propose that one of the
packages be renamed.  We can either rename the xml package to be
released with Python 2.0 to xmlcore, or we can rename the xml-sig's
xml package to xmlsig (or whatever they like).  (Then when in 2.1 the
issue is resolved, we can rename the compatible solution back to xml.)

Given that the xml-sig already has released packages called xml, the
best solution (and one which doesn't require the cooperation of the
xml-sig!) is to rename the 2.0 core xml package to xmlcore.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From tim_one at email.msn.com  Thu Sep 14 23:28:22 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 14 Sep 2000 17:28:22 -0400
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <200009140940.LAA02556@python.inrialpes.fr>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEBCHGAA.tim_one@email.msn.com>

[Vladimir Marangozov]
> Nobody is condemned when receptive. You're inflexibly persistent here.

I'm terse due to lack of both time for, and interest in, this issue.  I'm
persistent because Guido already ruled on this, has explicitly declined to
change his mind, and that's the way this language has always evolved.  Had
you hung around Python in the early days, there was often *no* discussion
about new features:  they just showed up by surprise.  Since that's how
lambda got in, maybe Guido started Python-Dev to oppose future mistakes like
that <wink>.

> Remove the feature, discuss it, try providing arguments so that we can
> agree (or disagree), write the PEP including a summary of the discussion,
> then decide and add the feature.

It was already very clear that that's what you want.  It should have been
equally clear that it's not what you're going to get on this one.  Take it
up with Guido if you must, but I'm out of it.

> In this particular case, I find Guido's attitude regarding the "rules of
> the game" (that you have fixed, btw, PEPs included) quite unpleasant.
>
> I speak for myself. Guido has invited me here so that I could share
> my opinions and experience easily and that's what I'm doing in my spare
> cycles (no, your agenda is not mine so I won't look at the bug list).

Then understand that my agenda is Guido's, and not only because he's my
boss.  Slashing the bug backlog *now* is something he believes is important
to Python's future, and evidently far more important to him than this
isolated little print gimmick.  It's also my recollection that he started
Python-Dev to get help on decisions that were important to him, not to
endure implacable opposition to every little thing he does.

If he debated every issue brought up on Python-Dev alone to the satisfaction
of just the people here, he would have time for nothing else.  That's the
truth.  As it is, he tells me he spends at least 2 hours every day just
*reading* Python-Dev, and I believe that, because I do too.  So long as this
is a dictatorship, I think it's impossible for people not to feel slighted
at times.  That's the way it's always been, and it's worked very well
despite that.

And I'll tell you something:  there is *nobody* in the history of Python who
has had more suggestions and "killer arguments" rejected by Guido than me.
I got over that in '93, though.  Play with him when you agree, back off when
he says "no".  That's what works.

> If you think I'm doing more harm than good, no problem. I'd be happy
> to decline his invitation and quit.

In general I think Guido believes your presence here is extremely helpful.
I know that I do.  On this particular issue, though, no, continuing to beat
on something after Guido says "case closed" isn't helpful.

> I'll be even more explicit:
>
> There are organizational bugs in the functioning of this micro-society
> that would need to be fixed first, IMHO. Other signs about this have
> been expressed in the past too. Nobody commented.

People have been griping about the way Python is run since '91, so I'm not
buying the idea that this is something new.  The PEP process *is* something
new and has been of very mixed utility so far, but is particularly
handicapped at the start due to the need to record old decisions whose
*real* debates actually ended a long time ago.

I certainly agree that the way this particular gimmick got snuck in violated
"the rules", and if it were anyone other than Guido who did it I'd be
skinning them alive.  I figure he's entitled, though.  Don't you?

> Silence can't rule forever. Note that I'm not writing arguments for
> my own pleasure or to scratch my nose. My time is precious enough, just
> like yours.

Honestly, I don't know why you've taken your time to pursue this repeatedly.
Did Guido say something to suggest that he might change his mind?  I didn't
see it.

> ...
> Open your eyes, though.

I believe they're open, but that we're seeing different visions of how
Python *should* be run.

> pre-release-pressure-can-do-more-harm-than-it-should'ly ly

We've held a strict line on "bugfixes only" since 2.0b1 went out the door,
and I've indeed spent many an hour debating that with the feature-crazed
too.  The debates about all that, and all this, and the license mess, are
sucking my life away.  I still think we're doing a damned good job, though
<wink>.

over-and-out-ly y'rs  - tim
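For context, the gimmick under debate: in 2.x, "print >> f" directed output
to f, and "print >> None" fell back to sys.stdout.  Python 3's function form
kept the same defaulting (file=None means sys.stdout), which this sketch uses:

```python
import io

# In 2.x, "print >> buf, ..." wrote to buf; "print >> None, ..." fell
# back to sys.stdout.  The 3.x function form has the same defaulting:
buf = io.StringIO()
print("hello", file=buf)    # explicit target, like "print >> buf"
print("world", file=None)   # file=None falls back to sys.stdout
captured = buf.getvalue()
```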





From tim_one at email.msn.com  Thu Sep 14 23:28:25 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 14 Sep 2000 17:28:25 -0400
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <39C0EE8E.770CAA17@prescod.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEBCHGAA.tim_one@email.msn.com>

[Paul Prescod]
> All of the little hacks and special cases add up.

Yes, they add up to a wonderful language <0.9 wink>.

> In the face of all of this confusion the safest thing would be to make
> print >> None illegal and then figure it out for Python 2.1.

There's no confusion in Guido's mind, though.

Well, not on this.  I'll tell you he's *real* confused about xml, though:
we're getting reports that the 2.0b1 version of the xml package is unusably
buggy.  If *that* doesn't get fixed, xml will get tossed out of 2.0final.
Fred Drake has volunteered to see what he can do about that, but it's
unclear whether he can make enough time to pursue it.





From effbot at telia.com  Thu Sep 14 23:46:11 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Thu, 14 Sep 2000 23:46:11 +0200
Subject: [Python-Dev] Re: [Python-Help] Bug in PyTuple_Resize
References: <200009141413.KAA21765@enkidu.stsci.edu> <200009141728.TAA04901@pandora.informatik.hu-berlin.de>
Message-ID: <005201c01e95$3741e680$766940d5@hagrid>

martin wrote:
> I spent some time with this bug, and found that it is in some
> unrelated code: the tuple resizing mechanism is is buggy if cyclic gc
> is enabled. A patch is included below. [and in SF patch 101509]

wow, that was quick!

I've assigned the bug back to you.  go ahead and check
it in, and mark the bug as closed.

thanks /F




From akuchlin at mems-exchange.org  Thu Sep 14 23:47:19 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Thu, 14 Sep 2000 17:47:19 -0400
Subject: [Python-Dev] Is the 2.0 xml package too immature to release?
In-Reply-To: <200009142225.RAA07360@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Thu, Sep 14, 2000 at 05:25:37PM -0500
References: <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com> <39C0EE8E.770CAA17@prescod.net> <200009142225.RAA07360@cj20424-a.reston1.va.home.com>
Message-ID: <20000914174719.A29499@kronos.cnri.reston.va.us>

On Thu, Sep 14, 2000 at 05:25:37PM -0500, Guido van Rossum wrote:
>by the XML-sig.  While the original intent was for the XML-sig package
>to be a superset of the core package, this doesn't appear to be
>currently the case, even if the brokenness of the core xml package can
>be fixed.

I'd be more inclined to blame the XML-SIG package; the last public
release is quite elderly, and the CVS tree hasn't been updated to be a
superset of the xml/ package in the Python tree.  However, if you want
to drop the Lib/xml/ package from Python, I have no objections at all;
I never wanted it in the first place.

--amk




From effbot at telia.com  Fri Sep 15 00:16:32 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Fri, 15 Sep 2000 00:16:32 +0200
Subject: [Python-Dev] ...as Python becomes a more popular operating system...
Message-ID: <000701c01e99$d0fac9a0$766940d5@hagrid>

http://www.upside.com/texis/mvm/story?id=39c10a5e0

</F>




From guido at beopen.com  Fri Sep 15 01:14:52 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 14 Sep 2000 18:14:52 -0500
Subject: [Python-Dev] Is the 2.0 xml package too immature to release?
In-Reply-To: Your message of "Thu, 14 Sep 2000 17:47:19 -0400."
             <20000914174719.A29499@kronos.cnri.reston.va.us> 
References: <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com> <39C0EE8E.770CAA17@prescod.net> <200009142225.RAA07360@cj20424-a.reston1.va.home.com>  
            <20000914174719.A29499@kronos.cnri.reston.va.us> 
Message-ID: <200009142314.SAA08092@cj20424-a.reston1.va.home.com>

> On Thu, Sep 14, 2000 at 05:25:37PM -0500, Guido van Rossum wrote:
> >by the XML-sig.  While the original intent was for the XML-sig package
> >to be a superset of the core package, this doesn't appear to be
> >currently the case, even if the brokenness of the core xml package can
> >be fixed.
> 
> I'd be more inclined to blame the XML-SIG package; the last public
> release is quite elderly, and the CVS tree hasn't been updated to be a
> superset of the xml/ package in the Python tree.  However, if you want
> to drop the Lib/xml/ package from Python, I have no objections at all;
> I never wanted it in the first place.

It's easy to blame.  (Aren't you responsible for the XML-SIG releases? :-)

I can't say that I wanted the xml package either -- I thought that the
XML-SIG wanted it, and insisted that it be called 'xml', conflicting
with their own offering.  I'm not part of that group, and have no time
to participate in a discussion there or read their archives.  Somebody
please get their attention -- otherwise it *will* be removed from 2.0!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From jeremy at beopen.com  Fri Sep 15 00:42:00 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 14 Sep 2000 18:42:00 -0400 (EDT)
Subject: [Python-Dev] ...as Python becomes a more popular operating system...
In-Reply-To: <000701c01e99$d0fac9a0$766940d5@hagrid>
References: <000701c01e99$d0fac9a0$766940d5@hagrid>
Message-ID: <14785.21560.61961.86040@bitdiddle.concentric.net>

I like Python plenty, but Emacs is my favorite operating system.

Jeremy



From MarkH at ActiveState.com  Fri Sep 15 00:37:22 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Fri, 15 Sep 2000 09:37:22 +1100
Subject: [Python-Dev] Is the 2.0 xml package too immature to release?
In-Reply-To: <20000914174719.A29499@kronos.cnri.reston.va.us>
Message-ID: <ECEPKNMJLHAPFFJHDOJBMEGHDJAA.MarkH@ActiveState.com>

[Guido]
> On Thu, Sep 14, 2000 at 05:25:37PM -0500, Guido van Rossum wrote:
> >by the XML-sig.  While the original intent was for the XML-sig package
> >to be a superset of the core package, this doesn't appear to be
> >currently the case, even if the brokenness of the core xml package can
> >be fixed.

[Andrew]
> I'd be more inclined to blame the XML-SIG package;

Definitely.  This XML stuff has cost me a number of hours a number of
times!  Always with other people's code, so I didn't know where to turn.

Now we find Guido saying things like:

> > the best solution (and one which doesn't require
> > the cooperation of the xml-sig!) is to rename
> > the 2.0 core xml package to xmlcore.

What is going on here?  We are forced to rename a core package, largely to
avoid the cooperation of, and avoid conflicting with, a SIG explicitly
setup to develop this core package in the first place!!!

How did this happen?  Does the XML SIG need to be shut down (while it still
can <wink>)?

> However, if you want to drop the Lib/xml/ package from
> Python, I have no objections at all; I never wanted it
> in the first place.

Agreed.  It must be dropped if it cannot be fixed.  As it stands, an
application can make no assumptions about which parts of xml work.

But IMO, the Python core has first grab at the name "xml" - if we can't get
the cooperation of the SIG, it should be their problem.  Where do we want
to be with respect to XML in a few years?  Surely not with some half-assed
"xmlcore" package, and some extra "xml" package you still need to get
anything done...

Mark.




From prescod at prescod.net  Fri Sep 15 01:25:38 2000
From: prescod at prescod.net (Paul)
Date: Thu, 14 Sep 2000 18:25:38 -0500 (CDT)
Subject: [Python-Dev] Re: Is the 2.0 xml package too immature to release?
In-Reply-To: <200009142225.RAA07360@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.LNX.4.21.0009141806390.25261-100000@amati.techno.com>

On Thu, 14 Sep 2000, Guido van Rossum wrote:

> > In the face of all of this confusion the safest thing would be to make
> > [...] illegal and then figure it out for Python 2.1. 
> 
> Taking out controversial features is a good idea in some cases, in
> order to prevent likely disasters.
> 
> I've heard that the xml support in 2.0b1 is broken, and that it's not
> clear that it will be possible to fix it in time (the 2.0b1 release is
> due in two weeks).  The best thing here seems to remove it and put it
> back in 2.1 (due 3-6 months after 2.0).  In the mean time, the XML-sig
> can release its own version.

I've been productively using the 2.0 XML package. There are three main
modules in there: Expat -- which I believe is fine, SAX -- which is not
finished, and minidom -- which has a couple of very minor known bugs
relating to standards conformance.
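A minimal sketch of the minidom piece Paul mentions, using the
xml.dom.minidom entry point (whose parseString call has kept the same shape
since then):

```python
from xml.dom import minidom

# Parse a document from a string and pull out the text of each element.
doc = minidom.parseString("<root><item>hi</item><item>there</item></root>")
items = [node.firstChild.data for node in doc.getElementsByTagName("item")]
```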

If you are asking whether SAX can be fixed in time then the answer is "I
think so but it is out of my hands."  I contributed fixes to SAX this
morning and the remaining known issues are design issues. I'm not the
designer. If I were the designer I'd call it done, make a test suite and
go home.

Whether or not it is finished, I see no reason to hold up either minidom
or expat. There have been very few complaints about either.

> The way I understand the situation right now is that there are two
> packages claiming the name xml; one in the 2.0 core and one released
> by the XML-sig.  While the original intent was for the XML-sig package
> to be a superset of the core package, this doesn't appear to be
> currently the case, even if the brokenness of the core xml package can
> be fixed.

That's true. Martin V. Loewis has promised to look into this situation for
us. I believe he has a good understanding of the issues.

> We absolutely cannot have a situation where there could be two
> applications, one working only with the xml-sig's xml package, and the
> other only with the 2.0 core xml package.  If at least one direction
> of compatibility cannot be guaranteed, I propose that one of the
> packages be renamed.  We can either rename the xml package to be
> released with Python 2.0 to xmlcore, or we can rename the xml-sig's
> xml package to xmlsig (or whatever they like).  (Then when in 2.1 the
> issue is resolved, we can rename the compatible solution back to xml.)
> 
> Given that the xml-sig already has released packages called xml, the
> best solution (and one which doesn't require the cooperation of the
> xml-sig!) is to rename the 2.0 core xml package to xmlcore.

I think it would be unfortunate if the Python xml processing package were
named xmlcore for eternity. The whole point of putting it in the core is
that it should become more popular and ubiquitous than an add-on module.

I'd rather see Martin given an opportunity to look into it. If he hasn't
made progress in a week then we can rename one or the other.

 Paul





From prescod at prescod.net  Fri Sep 15 01:53:15 2000
From: prescod at prescod.net (Paul)
Date: Thu, 14 Sep 2000 18:53:15 -0500 (CDT)
Subject: [Python-Dev] Is the 2.0 xml package too immature to release?
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBMEGHDJAA.MarkH@ActiveState.com>
Message-ID: <Pine.LNX.4.21.0009141829330.25261-100000@amati.techno.com>

On Fri, 15 Sep 2000, Mark Hammond wrote:

> [Andrew]
> > I'd be more inclined to blame the XML-SIG package;
> 
> Definately.  This XML stuff has cost me a number of hours a number of
> times!  Always with other people's code, so I didnt know where to turn.

The XML SIG package is unstable. It's a grab bag. It's the cool stuff
people have been working on. I've said about a hundred times that it will
never get to version 1, will never be stable, will never be reliable
because that isn't how anyone views it. I don't see it as a flaw: it's the
place you go for cutting-edge XML stuff. That's why Andrew and Guido are
dead wrong that we don't need an xml package in the core. That's
where the stable stuff goes. Expat and Minidom are stable. IIRC, their
APIs have only changed in minor ways in the last year.

> What is going on here?  We are forced to rename a core package, largely to
> avoid the cooperation of, and avoid conflicting with, a SIG explicitly
> setup to develop this core package in the first place!!!
> 
> How did this happen?  Does the XML SIG need to be shut down (while it still
> can <wink>)?

It's not that anybody is not cooperating. It's that there are a small
number of people doing the actual work and they drop in and out of
availability based on their real life jobs. It isn't always, er, polite to
tell someone "get out of the way I'll do it myself." Despite the fact that
all the nasty hints are being dropped in my direction, nobody exercises a
BDFL position in the XML SIG. There's the central issue. Nobody imposes
deadlines, nobody says what features should go in or shouldn't and in what
form. If I tried to do so I would be rightfully slapped down.

> But IMO, the Python core has first grab at the name "xml" - if we can't get
> the cooperation of the SIG, it should be their problem.  Where do we want
> to be with respect to XML in a few years?  Surely not with some half-assed
> "xmlcore" packge, and some extra "xml" package you still need to get
> anything done...

It's easy to say that the core is important and the sig package is
secondary, but:

 a) Guido says that they are both important
 b) The sig package has some users (at least a few) with running code

Nevertheless, I agree with you that in the long term we will wish we had
just used the name "xml" for the core package. I'm just pointing out that
it isn't as simple as it looks when you aren't involved.

 Paul Prescod




From prescod at prescod.net  Fri Sep 15 02:12:28 2000
From: prescod at prescod.net (Paul)
Date: Thu, 14 Sep 2000 19:12:28 -0500 (CDT)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <200009142215.RAA07332@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.LNX.4.21.0009141910140.25261-100000@amati.techno.com>

On Thu, 14 Sep 2000, Guido van Rossum wrote:
> ...
>
> Paul, I don't see why you keep whining about this. ...
> ...
> 
> If this had been in the language from day one nobody would have
> challenged it.  (And I've used my time machine to prove it, so don't
> argue. :-)

Well I still dislike "print" and map( None, ...) but yes, the societal bar
is much higher for change than for status quo. That's how the world works.
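For the record, map(None, ...) in the Python of that era zipped its
argument sequences, padding the shorter ones with None;
itertools.zip_longest is the later spelling of the same behavior:

```python
from itertools import zip_longest

# map(None, [1, 2, 3], ["a"]) in 2.x produced [(1, 'a'), (2, None),
# (3, None)]; zip_longest reproduces that padding behavior.
pairs = list(zip_longest([1, 2, 3], ["a"], fillvalue=None))
```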

> If you believe I should no longer be the BDFL, say so, but please keep
> it out of python-dev.  We're trying to get work done here.  You're an
> employee of a valued member of the Python Consortium.  As such you can
> request (through your boss) to be subscribed to the Consortium mailing
> list.  Feel free to bring this up there -- there's not much else going
> on there.

What message are you replying to?

According to the archives, I've sent four messages since the beginning of
September. None of them suggest you are doing a bad job as BDFL (other
than being wrong on this particular issue).

 Paul Prescod





From trentm at ActiveState.com  Fri Sep 15 02:20:45 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Thu, 14 Sep 2000 17:20:45 -0700
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src configure.in,1.156,1.157 configure,1.146,1.147 config.h.in,2.72,2.73
In-Reply-To: <200009141547.IAA14881@slayer.i.sourceforge.net>; from fdrake@users.sourceforge.net on Thu, Sep 14, 2000 at 08:47:10AM -0700
References: <200009141547.IAA14881@slayer.i.sourceforge.net>
Message-ID: <20000914172045.E3038@ActiveState.com>

On Thu, Sep 14, 2000 at 08:47:10AM -0700, Fred L. Drake wrote:
> Update of /cvsroot/python/python/dist/src
> In directory slayer.i.sourceforge.net:/tmp/cvs-serv14790
> 
> Modified Files:
> 	configure.in configure config.h.in 
> Log Message:
> 
> Allow configure to detect whether ndbm.h or gdbm/ndbm.h is installed.
> This allows dbmmodule.c to use either without having to add additional
> options to the Modules/Setup file or make source changes.
> 
> (At least some Linux systems use gdbm to emulate ndbm, but only install
> the ndbm.h header as /usr/include/gdbm/ndbm.h.)
>
> Index: configure.in
> ===================================================================
> RCS file: /cvsroot/python/python/dist/src/configure.in,v
> retrieving revision 1.156
> retrieving revision 1.157
> diff -C2 -r1.156 -r1.157
> *** configure.in	2000/09/08 02:17:14	1.156
> --- configure.in	2000/09/14 15:47:04	1.157
> ***************
> *** 372,376 ****
>   sys/audioio.h sys/file.h sys/lock.h db_185.h db.h \
>   sys/param.h sys/select.h sys/socket.h sys/time.h sys/times.h \
> ! sys/un.h sys/utsname.h sys/wait.h pty.h libutil.h)
>   AC_HEADER_DIRENT
>   
> --- 372,376 ----
>   sys/audioio.h sys/file.h sys/lock.h db_185.h db.h \
>   sys/param.h sys/select.h sys/socket.h sys/time.h sys/times.h \
> ! sys/un.h sys/utsname.h sys/wait.h pty.h libutil.h ndbm.h gdbm/ndbm.h)
>   AC_HEADER_DIRENT

Is this the correct fix? Previously I had been compiling the dbmmodule on
Debian and RedHat boxes using /usr/include/db1/ndbm.h (I had to change the
Setup.in line to include this directory).  Now the configure test says that
ndbm.h does not exist, and this patch (see below) to dbmmodule.c won't
compile.



> Index: dbmmodule.c
> ===================================================================
> RCS file: /cvsroot/python/python/dist/src/Modules/dbmmodule.c,v
> retrieving revision 2.22
> retrieving revision 2.23
> diff -C2 -r2.22 -r2.23
> *** dbmmodule.c   2000/09/01 23:29:26 2.22
> --- dbmmodule.c   2000/09/14 15:48:06 2.23
> ***************
> *** 8,12 ****
> --- 8,22 ----
>   #include <sys/stat.h>
>   #include <fcntl.h>
> +
> + /* Some Linux systems install gdbm/ndbm.h, but not ndbm.h.  This supports
> +  * whichever configure was able to locate.
> +  */
> + #if defined(HAVE_NDBM_H)
>   #include <ndbm.h>
> + #elif defined(HAVE_GDBM_NDBM_H)
> + #include <gdbm/ndbm.h>
> + #else
> + #error "No ndbm.h available!"
> + #endif
>
>   typedef struct {


-- 
Trent Mick
TrentM at ActiveState.com
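The module being configured above surfaces in Python as dbm (spelled
dbm.ndbm / dbm.gnu in Python 3, with dbm.open picking whichever backend the
build found).  A minimal round trip, relying only on the always-available
fallback backend:

```python
import dbm
import os
import tempfile

# dbm.open picks whatever backend the build found (ndbm, gdbm, or the
# pure-Python fallback); "c" creates the database if it doesn't exist.
path = os.path.join(tempfile.mkdtemp(), "demo-db")
with dbm.open(path, "c") as db:
    db[b"key"] = b"value"
with dbm.open(path, "r") as db:
    stored = db[b"key"]
```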



From akuchlin at cnri.reston.va.us  Fri Sep 15 04:05:40 2000
From: akuchlin at cnri.reston.va.us (Andrew Kuchling)
Date: Thu, 14 Sep 2000 22:05:40 -0400
Subject: [Python-Dev] Is the 2.0 xml package too immature to release?
In-Reply-To: <200009142314.SAA08092@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Thu, Sep 14, 2000 at 06:14:52PM -0500
References: <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com> <39C0EE8E.770CAA17@prescod.net> <200009142225.RAA07360@cj20424-a.reston1.va.home.com> <20000914174719.A29499@kronos.cnri.reston.va.us> <200009142314.SAA08092@cj20424-a.reston1.va.home.com>
Message-ID: <20000914220540.A26196@newcnri.cnri.reston.va.us>

On Thu, Sep 14, 2000 at 06:14:52PM -0500, Guido van Rossum wrote:
>It's easy to blame.  (Aren't you responsible for the XML-SIG releases? :-)

Correct; I wouldn't presume to flagellate someone else.

>I can't say that I wanted the xml package either -- I thought that the
>XML-SIG wanted it, and insisted that it be called 'xml', conflicting
>with their own offering.  I'm not part of that group, and have no time

Most of the XML-SIG does want it; I'm just not one of them.

--amk



From petrilli at amber.org  Fri Sep 15 04:29:35 2000
From: petrilli at amber.org (Christopher Petrilli)
Date: Thu, 14 Sep 2000 22:29:35 -0400
Subject: [Python-Dev] ...as Python becomes a more popular operating system...
In-Reply-To: <14785.21560.61961.86040@bitdiddle.concentric.net>; from jeremy@beopen.com on Thu, Sep 14, 2000 at 06:42:00PM -0400
References: <000701c01e99$d0fac9a0$766940d5@hagrid> <14785.21560.61961.86040@bitdiddle.concentric.net>
Message-ID: <20000914222935.A16149@trump.amber.org>

Jeremy Hylton [jeremy at beopen.com] wrote:
> I like Python plenty, but Emacs is my favorite operating system.

M-% operating system RET religion RET !

:-)
Chris
-- 
| Christopher Petrilli
| petrilli at amber.org



From moshez at math.huji.ac.il  Fri Sep 15 13:06:44 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Fri, 15 Sep 2000 14:06:44 +0300 (IDT)
Subject: [Python-Dev] Vacation
Message-ID: <Pine.GSO.4.10.10009151403560.23713-100000@sundial>

I'm going to be away from my e-mail from the 16th to the 23rd as I'm going
to be vacationing in the Netherlands. Please do not count on me to do
anything that needs to be done until the 24th. I currently have two
patches assigned to me which should be considered before b2, so if b2 is
before the 24th, please assign them to someone else.

Thanks in advance.
--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From guido at beopen.com  Fri Sep 15 14:40:52 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 15 Sep 2000 07:40:52 -0500
Subject: [Python-Dev] Re: Is the 2.0 xml package too immature to release?
In-Reply-To: Your message of "Thu, 14 Sep 2000 18:25:38 EST."
             <Pine.LNX.4.21.0009141806390.25261-100000@amati.techno.com> 
References: <Pine.LNX.4.21.0009141806390.25261-100000@amati.techno.com> 
Message-ID: <200009151240.HAA09833@cj20424-a.reston1.va.home.com>

[me]
> > Given that the xml-sig already has released packages called xml, the
> > best solution (and one which doesn't require the cooperation of the
> > xml-sig!) is to rename the 2.0 core xml package to xmlcore.
> 
> I think it would be unfortunate if the Python xml processing package be
> named xmlcore for eternity. The whole point of putting it in the core is
> that it should become more popular and ubiquitous than an add-on module.

I'm not proposing that it be called xmlcore for eternity, but I see a
*practical* problem with the 2.0 release: the xml-sig has a package
called 'xml' (and they've had dibs on the name for years!) which is
incompatible.  We can't force them to issue a new release under a
different name.  I don't want to break other people's code that
requires the xml-sig's xml package.

I propose the following:

We remove the '_xmlplus' feature.  It seems better not to rely on the
xml-sig to provide upgrades to the core xml package.  We're planning
2.1, 2.2, ... releases 3-6 months apart which should be quick enough
for most upgrade needs; we can issue service packs in between if
necessary.
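The '_xmlplus' feature being dropped worked roughly like this: at import
time the core package checks for an installed add-on and substitutes it in
sys.modules.  A sketch with made-up module names, not the actual Lib/xml
code:

```python
import sys
import types

# Stand-in for an installed add-on package such as the XML-SIG's;
# the name "_fakeplus" is purely illustrative.
addon = types.ModuleType("_fakeplus")
addon.version = "add-on"
sys.modules["_fakeplus"] = addon

def load_with_override(core_name, addon_name):
    # Prefer the add-on package when it is importable; otherwise fall
    # back to a core implementation.
    try:
        mod = __import__(addon_name)
    except ImportError:
        mod = types.ModuleType(core_name)
        mod.version = "core"
    sys.modules[core_name] = mod
    return mod

picked = load_with_override("fakexml", "_fakeplus")
fallback = load_with_override("fakecore", "_no_such_addon_xyz")
```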

*IF* (and that's still a big "if"!) the xml core support is stable
before Sept. 26, we'll keep it under the name 'xmlcore'.  If it's not
stable, we remove it, but we'll consider it for 2.1.

In 2.1, presuming the XML-sig has released its own package under a
different name, we'll rename 'xmlcore' to 'xml' (keeping 'xmlcore' as
a backwards compatibility feature until 2.2).

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Fri Sep 15 14:46:30 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 15 Sep 2000 07:46:30 -0500
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: Your message of "Thu, 14 Sep 2000 19:12:28 EST."
             <Pine.LNX.4.21.0009141910140.25261-100000@amati.techno.com> 
References: <Pine.LNX.4.21.0009141910140.25261-100000@amati.techno.com> 
Message-ID: <200009151246.HAA09902@cj20424-a.reston1.va.home.com>

> Well I still dislike "print" and map( None, ...) but yes, the societal bar
> is much higher for change than for status quo. That's how the world works.

Thanks.  You're getting over it just fine.  Don't worry!

> > If you believe I should no longer be the BDFL, say so, but please keep
> > it out of python-dev.  We're trying to get work done here.  You're an
> > employee of a valued member of the Python Consortium.  As such you can
> > request (through your boss) to be subscribed to the Consortium mailing
> > list.  Feel free to bring this up there -- there's not much else going
> > on there.
> 
> What message are you replying to?
> 
> According to the archives, I've sent four messages since the beginning of
> September. None of them suggest you are doing a bad job as BDFL (other
> than being wrong on this particular issue).

My apologies.  It must have been Vladimir's.  I was on the phone and
in meetings for most of the day and saw a whole slew of messages about
this issue.  Let's put this to rest -- I still have 50 more messages
to skim.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From thomas.heller at ion-tof.com  Fri Sep 15 17:05:22 2000
From: thomas.heller at ion-tof.com (Thomas Heller)
Date: Fri, 15 Sep 2000 17:05:22 +0200
Subject: [Python-Dev] Bug in 1.6 and 2.0b1 re?
Message-ID: <032a01c01f26$624a7900$4500a8c0@thomasnb>

[I posted this to the distutils mailing list, but have not yet
received an answer]

> This may not be directly related to distutils,
> it may also be a bug in 1.6 and 2.0b1 re implementation.
> 
> 'setup.py sdist' with the current distutils CVS version
> hangs while parsing MANIFEST.in,
> executing the re.sub command in these lines in text_file.py:
> 
>         # collapse internal whitespace (*after* joining lines!)
>         if self.collapse_ws:
>             line = re.sub (r'(\S)\s+(\S)', r'\1 \2', line)
> 
> 
> Has anyone else noticed this, or is something wrong on my side?
> 
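A regex-free version of that collapse step sidesteps the suspect engine
entirely; note its semantics differ slightly from the re.sub line (it also
strips leading and trailing whitespace), so this is a workaround sketch,
not the distutils fix:

```python
def collapse_ws(line):
    # Collapse every internal run of whitespace to a single space
    # without using re; unlike the re.sub above, this also strips
    # leading and trailing whitespace.
    return " ".join(line.split())

collapsed = collapse_ws("include *.txt\t  *.py")
```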

[And a similar problem has been posted to c.l.p by vio]

> I believe there may be a RE bug in 2.0b1. Consider the following script:
> 
> #!/usr/bin/env python
> import re
> s = "red green blue"
> m = re.compile(r'green (\w+)', re.IGNORECASE)
> t = re.subn(m, r'matchedword \1 blah', s)
> print t
> 
> 
> When I run this on 1.5.2, I get the following expected output:
> 
> ('red matchedword blue blah', 1)
> 
> 
> If I run it on 2.0b1, python basically hangs.
> 

Thomas




From guido at beopen.com  Fri Sep 15 18:24:47 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 15 Sep 2000 11:24:47 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib pickle.py,1.38,1.39
In-Reply-To: Your message of "Fri, 15 Sep 2000 08:14:54 MST."
             <200009151514.IAA26707@slayer.i.sourceforge.net> 
References: <200009151514.IAA26707@slayer.i.sourceforge.net> 
Message-ID: <200009151624.LAA10888@cj20424-a.reston1.va.home.com>

> --- 578,624 ----
>   
>       def load_string(self):
> !         rep = self.readline()[:-1]
> !         if not self._is_string_secure(rep):
> !             raise ValueError, "insecure string pickle"
> !         self.append(eval(rep,
>                            {'__builtins__': {}})) # Let's be careful
>       dispatch[STRING] = load_string
> + 
> +     def _is_string_secure(self, s):
> +         """Return true if s contains a string that is safe to eval
> + 
> +         The definition of secure string is based on the implementation
> +         in cPickle.  s is secure as long as it only contains a quoted
> +         string and optional trailing whitespace.
> +         """
> +         q = s[0]
> +         if q not in ("'", '"'):
> +             return 0
> +         # find the closing quote
> +         offset = 1
> +         i = None
> +         while 1:
> +             try:
> +                 i = s.index(q, offset)
> +             except ValueError:
> +                 # if there is an error the first time, there is no
> +                 # close quote
> +                 if offset == 1:
> +                     return 0
> +             if s[i-1] != '\\':
> +                 break
> +             # check to see if this one is escaped
> +             nslash = 0
> +             j = i - 1
> +             while j >= offset and s[j] == '\\':
> +                 j = j - 1
> +                 nslash = nslash + 1
> +             if nslash % 2 == 0:
> +                 break
> +             offset = i + 1
> +         for c in s[i+1:]:
> +             if ord(c) > 32:
> +                 return 0
> +         return 1
>   
>       def load_binstring(self):

Hm...  This seems to add a lot of work to a very common item in
pickles.

I had a different idea on how to make this safe from abuse: pass eval
a globals dict with an empty __builtins__ dict, as follows:
{'__builtins__': {}}.

Have you timed it?

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Fri Sep 15 18:29:40 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 15 Sep 2000 11:29:40 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib pickle.py,1.38,1.39
In-Reply-To: Your message of "Fri, 15 Sep 2000 11:24:47 EST."
             <200009151624.LAA10888@cj20424-a.reston1.va.home.com> 
References: <200009151514.IAA26707@slayer.i.sourceforge.net>  
            <200009151624.LAA10888@cj20424-a.reston1.va.home.com> 
Message-ID: <200009151629.LAA10956@cj20424-a.reston1.va.home.com>

[I wrote]
> Hm...  This seems to add a lot of work to a very common item in
> pickles.
> 
> I had a different idea on how to make this safe from abuse: pass eval
> a globals dict with an empty __builtins__ dict, as follows:
> {'__builtins__': {}}.

I forgot that this is already how it's done.  But my point remains:
who says that this can cause security violations?  Sure, it can cause
unpickling to fail with an exception -- so can tons of other invalid
pickles.  But is it a security violation?
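The mechanism in question can be seen in isolation: eval with an empty
__builtins__ cannot reach names like open, so a malicious "string" pickle
fails with an exception.  (As later became well understood, an empty
__builtins__ is not a robust sandbox; this sketch only illustrates the
behavior being discussed.)

```python
def eval_string_literal(rep):
    # Evaluate a pickled string repr with builtins stripped, the way
    # pickle.py's load_string does; names like open are unreachable.
    return eval(rep, {"__builtins__": {}})

value = eval_string_literal("'abc'")
try:
    eval_string_literal("open('/etc/passwd')")
    blocked = False
except NameError:
    blocked = True
```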

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From trentm at ActiveState.com  Fri Sep 15 17:30:28 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Fri, 15 Sep 2000 08:30:28 -0700
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules structmodule.c,2.38,2.39
In-Reply-To: <200009150732.AAA08842@slayer.i.sourceforge.net>; from loewis@users.sourceforge.net on Fri, Sep 15, 2000 at 12:32:01AM -0700
References: <200009150732.AAA08842@slayer.i.sourceforge.net>
Message-ID: <20000915083028.D30529@ActiveState.com>

On Fri, Sep 15, 2000 at 12:32:01AM -0700, Martin v. Löwis wrote:
> Modified Files:
> 	structmodule.c 
> Log Message:
> Check range for bytes and shorts. Closes bug #110845.
> 
> 
> + 	if (x < -32768 || x > 32767){
> + 		PyErr_SetString(StructError,
> + 				"short format requires -32768<=number<=32767");
> + 		return -1;
> + 	}

Would it not be cleaner to use SHRT_MIN and SHRT_MAX (from limits.h I think)
here?

> + 	if (x < 0 || x > 65535){
> + 		PyErr_SetString(StructError,
> + 				"short format requires 0<=number<=65535");
> + 		return -1;
> + 	}
> + 	* (unsigned short *)p = (unsigned short)x;

And USHRT_MIN and USHRT_MAX here?


No biggie though.
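For reference, the range check that checkin adds is observable from the
Python side through the struct module (a quick sketch in current Python;
the exact error message aside):

```python
import struct

# 'h' is the signed short format; values outside [-32768, 32767]
# raise struct.error, which is what the new C-level check enforces.
assert struct.pack('<h', 32767) == b'\xff\x7f'

overflowed = False
try:
    struct.pack('<h', 32768)   # one past SHRT_MAX
except struct.error:
    overflowed = True
assert overflowed
```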

Trent

-- 
Trent Mick
TrentM at ActiveState.com



From trentm at ActiveState.com  Fri Sep 15 17:35:19 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Fri, 15 Sep 2000 08:35:19 -0700
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules structmodule.c,2.38,2.39
In-Reply-To: <20000915083028.D30529@ActiveState.com>; from trentm@ActiveState.com on Fri, Sep 15, 2000 at 08:30:28AM -0700
References: <200009150732.AAA08842@slayer.i.sourceforge.net> <20000915083028.D30529@ActiveState.com>
Message-ID: <20000915083519.E30529@ActiveState.com>

On Fri, Sep 15, 2000 at 08:30:28AM -0700, Trent Mick wrote:
> On Fri, Sep 15, 2000 at 12:32:01AM -0700, Martin v. Löwis wrote:
> > Modified Files:
> > 	structmodule.c 
> > Log Message:
> > Check range for bytes and shorts. Closes bug #110845.
> > 
> > 
> > + 	if (x < -32768 || x > 32767){
> > + 		PyErr_SetString(StructError,
> > + 				"short format requires -32768<=number<=32767");
> > + 		return -1;
> > + 	}
> 
> Would it not be cleaner to use SHRT_MIN and SHRT_MAX (from limits.h I think)
> here?
> 
> > + 	if (x < 0 || x > 65535){
> > + 		PyErr_SetString(StructError,
> > + 				"short format requires 0<=number<=65535");
> > + 		return -1;
> > + 	}
> > + 	* (unsigned short *)p = (unsigned short)x;
> 
> And USHRT_MIN and USHRT_MAX here?
> 


Heh, heh. I jumped a bit quickly on that one. Three checkin messages later
this suggestion was applied. :) Sorry about that, Martin.


Trent

-- 
Trent Mick
TrentM at ActiveState.com



From paul at prescod.net  Fri Sep 15 18:02:40 2000
From: paul at prescod.net (Paul Prescod)
Date: Fri, 15 Sep 2000 09:02:40 -0700
Subject: [Python-Dev] Re: Is the 2.0 xml package too immature to release?
References: <Pine.LNX.4.21.0009141806390.25261-100000@amati.techno.com> <200009151240.HAA09833@cj20424-a.reston1.va.home.com>
Message-ID: <39C24820.FB951E80@prescod.net>

Guido van Rossum wrote:
> 
> ...
> 
> I'm not proposing that it be called xmlcore for eternity, but I see a
> *practical* problem with the 2.0 release: the xml-sig has a package
> called 'xml' (and they've had dibs on the name for years!) which is
> incompatible.  We can't force them to issue a new release under a
> different name.  I don't want to break other people's code that
> requires the xml-sig's xml package.

Martin v. Loewis, Greg Stein and others think that they have a
backwards-compatible solution. You can decide whether to let Martin try
versus go the "xmlcore" route, or else you could delegate that decision
(to someone in particular, please!).

> I propose the following:
> 
> We remove the '_xmlplus' feature.  It seems better not to rely on the
> xml-sig to provide upgrades to the core xml package.  We're planning
> 2.1, 2.2, ... releases 3-6 months apart which should be quick enough
> for most upgrade needs; we can issue service packs in between if
> necessary.

I could live with this proposal but it isn't my decision. Are you
instructing the SIG to do this? Or are you suggesting I go back to the
SIG and start a discussion on it? What decision making procedure do you
advocate? Who is supposed to make this decision?

> *IF* (and that's still a big "if"!) the xml core support is stable
> before Sept. 26, we'll keep it under the name 'xmlcore'.  If it's not
> stable, we remove it, but we'll consider it for 2.1.

We can easily have something stable within a few days from now. In fact,
all reported bugs are already fixed in patches that I will check in
today. There are no hard technical issues here.

-- 
 Paul Prescod - Not encumbered by corporate consensus
Simplicity does not precede complexity, but follows it. 
	- http://www.cs.yale.edu/homes/perlis-alan/quotes.html



From guido at beopen.com  Fri Sep 15 19:12:31 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 15 Sep 2000 12:12:31 -0500
Subject: [Python-Dev] Re: Is the 2.0 xml package too immature to release?
In-Reply-To: Your message of "Fri, 15 Sep 2000 09:02:40 MST."
             <39C24820.FB951E80@prescod.net> 
References: <Pine.LNX.4.21.0009141806390.25261-100000@amati.techno.com> <200009151240.HAA09833@cj20424-a.reston1.va.home.com>  
            <39C24820.FB951E80@prescod.net> 
Message-ID: <200009151712.MAA13107@cj20424-a.reston1.va.home.com>

[me]
> > I'm not proposing that it be called xmlcore for eternity, but I see a
> > *practical* problem with the 2.0 release: the xml-sig has a package
> > called 'xml' (and they've had dibs on the name for years!) which is
> > incompatible.  We can't force them to issue a new release under a
> > different name.  I don't want to break other people's code that
> > requires the xml-sig's xml package.

[Paul]
> Martin v. Loewis, Greg Stein and others think that they have a
> backwards-compatible solution. You can decide whether to let Martin try
> versus go the "xmlcore" route, or else you could delegate that decision
> (to someone in particular, please!).

I will make the decision based on information gathered by Fred Drake.
You, Martin, Greg Stein and others have to get the information to him.

> > I propose the following:
> > 
> > We remove the '_xmlplus' feature.  It seems better not to rely on the
> > xml-sig to provide upgrades to the core xml package.  We're planning
> > 2.1, 2.2, ... releases 3-6 months apart which should be quick enough
> > for most upgrade needs; we can issue service packs in between if
> > necessary.
> 
> I could live with this proposal but it isn't my decision. Are you
> instructing the SIG to do this? Or are you suggesting I go back to the
> SIG and start a discussion on it? What decision making procedure do you
> advocate? Who is supposed to make this decision?

I feel that the XML-SIG isn't ready for action, so I'm making it easy
for them: they don't have to do anything.  Their package is called
'xml'.  The core package will be called something else.

> > *IF* (and that's still a big "if"!) the xml core support is stable
> > before Sept. 26, we'll keep it under the name 'xmlcore'.  If it's not
> > stable, we remove it, but we'll consider it for 2.1.
> 
> We can easily have something stable within a few days from now. In fact,
> all reported bugs are already fixed in patches that I will check in
> today. There are no hard technical issues here.

Thanks.  This is a great help!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From jeremy at beopen.com  Fri Sep 15 18:54:17 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Fri, 15 Sep 2000 12:54:17 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib pickle.py,1.38,1.39
In-Reply-To: <200009151624.LAA10888@cj20424-a.reston1.va.home.com>
References: <200009151514.IAA26707@slayer.i.sourceforge.net>
	<200009151624.LAA10888@cj20424-a.reston1.va.home.com>
Message-ID: <14786.21561.493632.580653@bitdiddle.concentric.net>

>>>>> "GvR" == Guido van Rossum <guido at beopen.com> writes:

  GvR> Hm...  This seems to add a lot of work to a very common item in
  GvR> pickles.

  GvR> I had a different idea on how to make this safe from abuse:
  GvR> pass eval a globals dict with an empty __builtins__ dict, as
  GvR> follows: {'__builtins__': {}}.

  GvR> Have you timed it?

I just timed it with a few test cases, using strings from
/dev/urandom. 

1. pickle dictionary with 25 items, 10-byte keys, 20-bytes values
   0.1% slowdown

2. pickle dictionary with 25 items, 15-byte keys, 100-byte values
   1.5% slowdown

3. pickle 8k string
   0.6% slowdown

The performance impact seems minimal.  And, of course, pickle is
already incredibly slow compared to cPickle.

So it isn't slow, but is it necessary?  I didn't give it much thought;
merely saw that cPickle did these checks in addition to calling eval
with an empty builtins dict.

Jim-- Is there a reason you added the "insecure string pickle"
feature?

I can't think of anything in particular that would go wrong other than
bizarre exceptions, e.g. OverflowError, SyntaxError, etc.  It would be
possible to construct pickles that produced unexpected objects, like
an instance with an attribute that is an integer:

    >>> x
    <__main__.Foo instance at 0x8140acc>
    >>> dir(x)
    [3, 'attr']

But there are so many other ways to produce weird objects using pickle
that this particular one does not seem to matter.

The only arguments I'm left with, which don't seem particularly
compelling, are:

1. Simplifies error checking for client, which can catch ValueError
   instead of multiplicity of errors
2. Compatibility with cPickle interface

Barring better ideas from Jim Fulton, it sounds like we should
probably remove the checks from both picklers.

Jeremy



From jeremy at beopen.com  Fri Sep 15 19:04:10 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Fri, 15 Sep 2000 13:04:10 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib pickle.py,1.38,1.39
In-Reply-To: <14786.21561.493632.580653@bitdiddle.concentric.net>
References: <200009151514.IAA26707@slayer.i.sourceforge.net>
	<200009151624.LAA10888@cj20424-a.reston1.va.home.com>
	<14786.21561.493632.580653@bitdiddle.concentric.net>
Message-ID: <14786.22154.794230.895070@bitdiddle.concentric.net>

I should have checked the revision history on cPickle before the last
post.  It says:

> revision 2.16
> date: 1997/12/08 15:15:16;  author: guido;  state: Exp;  lines: +50 -24
> Jim Fulton:
> 
>         - Loading non-binary string pickles checks for insecure
>           strings. This is needed because cPickle (still)
>           uses a restricted eval to parse non-binary string pickles.
>           This change is needed to prevent untrusted
>           pickles like::
> 
>             "S'hello world'*2000000\012p0\012."
> 
>           from hosing an application.
> 

So the justification seems to be that an attacker could easily consume
a lot of memory on a system and bog down an application if eval is
used to load the strings.  I imagine there are other ways to cause
trouble, but I don't see much harm in preventing this particular one.

Try running this with the old pickle.  It locked my system up for a
good 30 seconds :-)

x = pickle.loads("S'hello world'*20000000\012p0\012.")
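(In current Python the STRING opcode argument is validated rather than
eval'ed, so the quoted literal loads fine while the '*20000000' trick is
refused outright; a small sketch:)

```python
import pickle

# A legitimate protocol-0 string pickle still loads:
good = pickle.loads(b"S'hello world'\np0\n.")
assert good == 'hello world'

# The hostile form is rejected because the STRING argument is not a
# properly quoted literal once the '*20000000' is appended.
rejected = False
try:
    pickle.loads(b"S'hello world'*20000000\np0\n.")
except pickle.UnpicklingError:
    rejected = True
assert rejected
```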

Jeremy



From jeremy at beopen.com  Sat Sep 16 00:27:15 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: 15 Sep 2000 18:27:15 -0400
Subject: [Python-Dev] [comp.lang.python] sys.setdefaultencoding (2.0b1)
Message-ID: <blhf7h1ebg.fsf@bitdiddle.concentric.net>

I was just reading comp.lang.python and saw an interesting question
that I couldn't answer.  Is anyone here game?

Jeremy
------- Start of forwarded message -------
From: Donn Cave <donn at u.washington.edu>
Newsgroups: comp.lang.python
Subject: sys.setdefaultencoding (2.0b1)
Date: 12 Sep 2000 22:11:31 GMT
Organization: University of Washington
Message-ID: <8pm9mj$3ie2$1 at nntp6.u.washington.edu>
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1

I see codecs.c has gone to some trouble to defer character encoding
setup until it's actually required for something, but it's required
rather early in the process anyway when site.py calls
sys.setdefaultencoding("ascii")

If I strike that line from site.py, startup time goes down by about
a third.

Is that too simple a fix?  Does setdefaultencoding("ascii") do something
important?

	Donn Cave, donn at u.washington.edu
------- End of forwarded message -------



From guido at beopen.com  Sat Sep 16 01:31:52 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 15 Sep 2000 18:31:52 -0500
Subject: [Python-Dev] [comp.lang.python] sys.setdefaultencoding (2.0b1)
In-Reply-To: Your message of "15 Sep 2000 18:27:15 -0400."
             <blhf7h1ebg.fsf@bitdiddle.concentric.net> 
References: <blhf7h1ebg.fsf@bitdiddle.concentric.net> 
Message-ID: <200009152331.SAA01300@cj20424-a.reston1.va.home.com>

> I was just reading comp.lang.python and saw an interesting question
> that I couldn't answer.  Is anyone here game?

>From reading the source code for unicodeobject.c, _PyUnicode_Init()
sets the default to "ascii" anyway, so the call in site.py is quite
unnecessary.  I think it's a good idea to remove it.  (Look around
though -- there are some "if 0:" blocks that could make it necessary.
Maybe the setdefaultencoding() call should be inside an "if 0:" block
too.  With a comment.)

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From nascheme at enme.ucalgary.ca  Sat Sep 16 00:36:14 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Fri, 15 Sep 2000 16:36:14 -0600
Subject: [Python-Dev] [comp.lang.python] sys.setdefaultencoding (2.0b1)
In-Reply-To: <200009152331.SAA01300@cj20424-a.reston1.va.home.com>; from Guido van Rossum on Fri, Sep 15, 2000 at 06:31:52PM -0500
References: <blhf7h1ebg.fsf@bitdiddle.concentric.net> <200009152331.SAA01300@cj20424-a.reston1.va.home.com>
Message-ID: <20000915163614.A7376@keymaster.enme.ucalgary.ca>

While we're optimizing the startup time, how about lazily loading the
LICENSE.txt file?

  Neil



From amk1 at erols.com  Sat Sep 16 03:10:30 2000
From: amk1 at erols.com (A.M. Kuchling)
Date: Fri, 15 Sep 2000 21:10:30 -0400
Subject: [Python-Dev] Problem with using _xmlplus
Message-ID: <E13a6Uw-0003ud-00@207-172-111-249.s249.tnt1.ann.va.dialup.rcn.com>

The code in Lib/xml/__init__.py seems to be insufficient to completely
delegate matters to the _xmlplus package.  Consider this session with
'python -v':

Script started on Fri Sep 15 21:02:59 2000
[amk at 207-172-111-249 quotations]$ python -v
  ...
>>> from xml.sax import saxlib, saxexts
import xml # directory /usr/lib/python2.0/xml
import xml # precompiled from /usr/lib/python2.0/xml/__init__.pyc
import _xmlplus # directory /usr/lib/python2.0/site-packages/_xmlplus
import _xmlplus # from /usr/lib/python2.0/site-packages/_xmlplus/__init__.py
import xml.sax # directory /usr/lib/python2.0/site-packages/_xmlplus/sax
import xml.sax # from /usr/lib/python2.0/site-packages/_xmlplus/sax/__init__.py
import xml.sax.saxlib # from /usr/lib/python2.0/site-packages/_xmlplus/sax/saxlib.py
import xml.sax.saxexts # from /usr/lib/python2.0/site-packages/_xmlplus/sax/saxexts.py
import imp # builtin

So far, so good.  Now try creating a parser.  This fails; I've hacked
the code slightly so it doesn't swallow the responsible ImportError:

>>> p=saxexts.XMLParserFactory.make_parser("xml.sax.drivers.drv_pyexpat")
import xml # directory /usr/lib/python2.0/xml
import xml # precompiled from /usr/lib/python2.0/xml/__init__.pyc
import sax # directory /usr/lib/python2.0/xml/sax
import sax # precompiled from /usr/lib/python2.0/xml/sax/__init__.pyc
import sax.handler # precompiled from /usr/lib/python2.0/xml/sax/handler.pyc
import sax.expatreader # precompiled from /usr/lib/python2.0/xml/sax/expatreader.pyc
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/usr/lib/python2.0/site-packages/_xmlplus/sax/saxexts.py", line 78, in make_parser
    info=rec_find_module(parser_name)
  File "/usr/lib/python2.0/site-packages/_xmlplus/sax/saxexts.py", line 25, in rec_find_module
    lastmod=apply(imp.load_module,info)
  File "/usr/lib/python2.0/xml/sax/__init__.py", line 21, in ?
    from expatreader import ExpatParser
  File "/usr/lib/python2.0/xml/sax/expatreader.py", line 23, in ?
    from xml.sax import xmlreader
ImportError: cannot import name xmlreader

_xmlplus.sax.saxexts uses imp.find_module() and imp.load_module() to
load parser drivers; it looks like those functions aren't looking at
sys.modules and therefore aren't being fooled by the sys.modules
hackery in Lib/xml/__init__.py, so the _xmlplus package isn't
completely overriding the xml/ package.
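The delegation trick in Lib/xml/__init__.py boils down to aliasing one
package under another's name in sys.modules; a minimal sketch (the names
demo_core/_demo_plus are hypothetical stand-ins for xml/_xmlplus):

```python
import sys
import types

# Build a stand-in "enhanced" package and register it under both its own
# name and the core package's name.
plus = types.ModuleType('_demo_plus')
plus.VERSION = '0.9-plus'
sys.modules['_demo_plus'] = plus
sys.modules['demo_core'] = plus

# A normal import statement consults sys.modules first, so it sees the alias:
import demo_core
assert demo_core.VERSION == '0.9-plus'
# ...but imp.find_module()-style filesystem searches never consult
# sys.modules, which is why saxexts bypasses the hack.
```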

The guts of Python's import machinery have always been mysterious to
me; can anyone suggest how to fix this?

--amk



From guido at beopen.com  Sat Sep 16 04:06:28 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 15 Sep 2000 21:06:28 -0500
Subject: [Python-Dev] Problem with using _xmlplus
In-Reply-To: Your message of "Fri, 15 Sep 2000 21:10:30 -0400."
             <E13a6Uw-0003ud-00@207-172-111-249.s249.tnt1.ann.va.dialup.rcn.com> 
References: <E13a6Uw-0003ud-00@207-172-111-249.s249.tnt1.ann.va.dialup.rcn.com> 
Message-ID: <200009160206.VAA09344@cj20424-a.reston1.va.home.com>

[Andrew discovers that the _xmlplus hack is broken]

I have recently proposed a simple and robust fix: forget all import
hacking, and use a different name for the xml package in the core and
the xml package provided by PyXML.  I first suggested the name
'xmlcore' for the core xml package, but Martin von Loewis suggested a
better name: 'xmlbase'.

Since PyXML has had dibs on the 'xml' package name for years, it's
best not to try to change that.  We can't force everyone who has
installed an old version of PyXML to upgrade (and to erase the old
package!) so the best solution is to pick a new name for the core XML
support package.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From martin at loewis.home.cs.tu-berlin.de  Sat Sep 16 08:24:41 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Sat, 16 Sep 2000 08:24:41 +0200
Subject: [Python-Dev] Re: [XML-SIG] Problem with using _xmlplus
In-Reply-To: 	<E13a6Uw-0003ud-00@207-172-111-249.s249.tnt1.ann.va.dialup.rcn.com>
	(amk1@erols.com)
References: <E13a6Uw-0003ud-00@207-172-111-249.s249.tnt1.ann.va.dialup.rcn.com>
Message-ID: <200009160624.IAA00804@loewis.home.cs.tu-berlin.de>

> The guts of Python's import machinery have always been mysterious to
> me; can anyone suggest how to fix this?

I had a patch on SF for some time, waiting for approval
(http://sourceforge.net/patch/?func=detailpatch&patch_id=101444&group_id=6473),
to fix that; I have now installed that patch.

Regards,
Martin



From larsga at garshol.priv.no  Sat Sep 16 12:26:34 2000
From: larsga at garshol.priv.no (Lars Marius Garshol)
Date: 16 Sep 2000 12:26:34 +0200
Subject: [XML-SIG] Re: [Python-Dev] Problem with using _xmlplus
In-Reply-To: <200009160206.VAA09344@cj20424-a.reston1.va.home.com>
References: <E13a6Uw-0003ud-00@207-172-111-249.s249.tnt1.ann.va.dialup.rcn.com> <200009160206.VAA09344@cj20424-a.reston1.va.home.com>
Message-ID: <m3lmwsy6n9.fsf@lambda.garshol.priv.no>

* Guido van Rossum
| 
| [suggests: the XML package in the Python core 'xmlbase']
| 
| Since PyXML has had dibs on the 'xml' package name for years, it's
| best not to try to change that.  We can't force everyone who has
| installed an old version of PyXML to upgrade (and to erase the old
| package!) so the best solution is to pick a new name for the core
| XML support package.

For what it's worth: I like this approach very much. It's simple,
intuitive and not likely to cause any problems.

--Lars M.




From mal at lemburg.com  Sat Sep 16 20:19:59 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Sat, 16 Sep 2000 20:19:59 +0200
Subject: [Python-Dev] [comp.lang.python] sys.setdefaultencoding (2.0b1)
References: <blhf7h1ebg.fsf@bitdiddle.concentric.net> <200009152331.SAA01300@cj20424-a.reston1.va.home.com>
Message-ID: <39C3B9CF.51441D94@lemburg.com>

Guido van Rossum wrote:
> 
> > I was just reading comp.lang.python and saw an interesting question
> > that I couldn't answer.  Is anyone here game?
> 
> >From reading the source code for unicodeobject.c, _PyUnicode_Init()
> sets the default to "ascii" anyway, so the call in site.py is quite
> unnecessary.  I think it's a good idea to remove it.  (Look around
> though -- there are some "if 0:" blocks that could make it necessary.
> Maybe the setdefaultencoding() call should be inside an "if 0:" block
> too.  With a comment.)

Agreed. I'll fix this next week.

Some background: the first codec lookup done causes the encodings
package to be loaded which then registers the encodings package
codec search function. Then the 'ascii' codec is looked up
via the codec registry. All this takes time and should only
be done in case the code really uses codecs... (at least that
was the idea).
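That lookup path is easy to observe (a small sketch in current Python,
where the caching behaviour still works as described):

```python
import codecs

# The first codecs.lookup() call loads the encodings package, which
# registers its codec search function; the resulting CodecInfo is then
# cached in the registry.
info = codecs.lookup('ascii')
assert info.name == 'ascii'
assert codecs.lookup('ascii') is info  # served from the registry cache
```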

> --Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)
> 
> > Jeremy
> > ------- Start of forwarded message -------
> > From: Donn Cave <donn at u.washington.edu>
> > Newsgroups: comp.lang.python
> > Subject: sys.setdefaultencoding (2.0b1)
> > Date: 12 Sep 2000 22:11:31 GMT
> > Organization: University of Washington
> > Message-ID: <8pm9mj$3ie2$1 at nntp6.u.washington.edu>
> > Mime-Version: 1.0
> > Content-Type: text/plain; charset=ISO-8859-1
> >
> > I see codecs.c has gone to some trouble to defer character encoding
> > setup until it's actually required for something, but it's required
> > rather early in the process anyway when site.py calls
> > sys.setdefaultencoding("ascii")
> >
> > If I strike that line from site.py, startup time goes down by about
> > a third.
> >
> > Is that too simple a fix?  Does setdefaultencoding("ascii") do something
> > important?
> >
> >       Donn Cave, donn at u.washington.edu
> > ------- End of forwarded message -------
> >
> > _______________________________________________
> > Python-Dev mailing list
> > Python-Dev at python.org
> > http://www.python.org/mailman/listinfo/python-dev
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://www.python.org/mailman/listinfo/python-dev

-- 
Marc-Andre Lemburg
________________________________________________________________________
Business:                                        http://www.lemburg.com/
Python Pages:                             http://www.lemburg.com/python/



From fdrake at beopen.com  Sun Sep 17 00:10:19 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Sat, 16 Sep 2000 18:10:19 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0042.txt,1.13,1.14
In-Reply-To: <200009162201.PAA21016@slayer.i.sourceforge.net>
References: <200009162201.PAA21016@slayer.i.sourceforge.net>
Message-ID: <14787.61387.996949.986311@cj42289-a.reston1.va.home.com>

Barry Warsaw writes:
 > Added request for cStringIO.StringIO.readlines() method.  Closes SF
 > bug #110686.

  I think the Patch Manager has a patch for this one, but I don't know
if it's any good.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From bwarsaw at beopen.com  Sun Sep 17 00:38:46 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Sat, 16 Sep 2000 18:38:46 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0042.txt,1.13,1.14
References: <200009162201.PAA21016@slayer.i.sourceforge.net>
	<14787.61387.996949.986311@cj42289-a.reston1.va.home.com>
Message-ID: <14787.63094.667182.915703@anthem.concentric.net>

>>>>> "Fred" == Fred L Drake, Jr <fdrake at beopen.com> writes:

    >> Added request for cStringIO.StringIO.readlines() method.
    >> Closes SF bug #110686.

    Fred>   I think the Patch Manager has a patch for this one, but I
    Fred> don't know if its any good.

It's patch #101423.  JimF, can you take a look and give a thumbs up or
down?  Or better yet, apply it to your canonical copy and send us an
update for the core.

http://sourceforge.net/patch/?func=detailpatch&patch_id=101423&group_id=5470

-Barry


From martin at loewis.home.cs.tu-berlin.de  Sun Sep 17 13:58:32 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Sun, 17 Sep 2000 13:58:32 +0200
Subject: [Python-Dev] [ Bug #110662 ] rfc822 (PR#358)
Message-ID: <200009171158.NAA01325@loewis.home.cs.tu-berlin.de>

Regarding your report in

http://sourceforge.net/bugs/?func=detailbug&bug_id=110662&group_id=5470

I can't reproduce the problem. In 2.0b1, 

>>> s="Location: https://www.website.com:443/tengah/Dpc/vContent.jhtml?page_type=3&PLANID=4&CONTENTPAGEID=0&TengahSession=312442259237-529/2748412123003458168/-1407548368/4/7002/7002/7004/7004\r\n\r\n" 
>>> t=rfc822.Message(cStringIO.StringIO(s)) 
>>> t['location'] 
'https://www.website.com:443/tengah/Dpc/vContent.jhtml?page_type=3&PLANID=4&CONTENTPAGEID=0&TengahSession=312442259237-529/2748412123003458168/-1407548368/4/7002/7002/7004/7004' 

works fine for me. If the line break between Location: and the URL in
the original report was intentional, rfc822.Message is right in
rejecting the header: Continuation lines must start with white space.
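The folding rule can be demonstrated with the email package, rfc822's
modern successor (a sketch; the abbreviated URL is illustrative):

```python
from email import policy
from email.parser import Parser

# A continuation line must begin with whitespace to be joined onto the
# previous header line; here the second line starts with a space.
raw = ("Location: https://www.website.com:443/tengah/Dpc/vContent.jhtml"
       "?page_type=3\r\n &PLANID=4\r\n\r\n")
msg = Parser(policy=policy.default).parsestr(raw)
loc = str(msg['Location'])
assert 'page_type=3' in loc and 'PLANID=4' in loc  # folded parts joined
```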

I also cannot see how the patch could improve anything; proper
continuation lines are already supported. On what system did you
experience the problem?

If I misunderstood the report, please let me know.

Regards,
Martin


From trentm at ActiveState.com  Sun Sep 17 23:27:18 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Sun, 17 Sep 2000 14:27:18 -0700
Subject: [Python-Dev] problems importing _tkinter on Linux build
Message-ID: <20000917142718.A25180@ActiveState.com>

I get the following error trying to import _tkinter in a Python 2.0 build:

> ./python
./python: error in loading shared libraries: libtk8.3.so: cannot open shared object file: No such file or directory


Here is the relevant section of my Modules/Setup:

_tkinter _tkinter.c tkappinit.c -DWITH_APPINIT \
    -I/usr/local/include \
    -I/usr/X11R6/include \
    -L/usr/local/lib \
    -ltk8.3 -ltcl8.3 \
    -L/usr/X11R6/lib \
    -lX11


I got the Tcl/Tk 8.3 source from dev.scriptics.com, and ran
  > ./configure --enable-gcc --enable-shared
  > make
  > make install   # as root
in the tcl and tk source directories.


The tcl and tk libs are in /usr/local/lib:

    [trentm at molotok contrib]$ ls -alF /usr/local/lib
    ...
    -r-xr-xr-x   1 root     root       579177 Sep 17 14:03 libtcl8.3.so*
    -rw-r--r--   1 root     root         1832 Sep 17 14:03 libtclstub8.3.a
    -r-xr-xr-x   1 root     root       778034 Sep 17 14:10 libtk8.3.so*
    -rw-r--r--   1 root     root         3302 Sep 17 14:10 libtkstub8.3.a
    drwxr-xr-x   8 root     root         4096 Sep 17 14:03 tcl8.3/
    -rw-r--r--   1 root     root         6722 Sep 17 14:03 tclConfig.sh
    drwxr-xr-x   4 root     root         4096 Sep 17 14:10 tk8.3/
    -rw-r--r--   1 root     root         3385 Sep 17 14:10 tkConfig.sh


Does anybody know what my problem is? Is the error from libtk8.3.so
complaining that it cannot load a library on which it depends? Is there some
system library dependency that I am likely missing?


Thanks,
Trent

-- 
Trent Mick
TrentM at ActiveState.com


From trentm at ActiveState.com  Sun Sep 17 23:46:14 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Sun, 17 Sep 2000 14:46:14 -0700
Subject: [Python-Dev] problems importing _tkinter on Linux build
In-Reply-To: <20000917142718.A25180@ActiveState.com>; from trentm@ActiveState.com on Sun, Sep 17, 2000 at 02:27:18PM -0700
References: <20000917142718.A25180@ActiveState.com>
Message-ID: <20000917144614.A25718@ActiveState.com>

On Sun, Sep 17, 2000 at 02:27:18PM -0700, Trent Mick wrote:
> 
> I get the following error trying to import _tkinter in a Python 2.0 build:
> 
> > ./python
> ./python: error in loading shared libraries: libtk8.3.so: cannot open shared object file: No such file or directory
> 

Duh, after learning about LD_LIBRARY_PATH (and setting it to
/usr/local/lib), everything is hunky dory. I presumed that /usr/local/lib
would be on the default search path for shared libraries. Bad assumption I
guess.

Trent


-- 
Trent Mick
TrentM at ActiveState.com


From martin at loewis.home.cs.tu-berlin.de  Mon Sep 18 08:59:33 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Mon, 18 Sep 2000 08:59:33 +0200
Subject: [Python-Dev] problems importing _tkinter on Linux build
Message-ID: <200009180659.IAA14068@loewis.home.cs.tu-berlin.de>

> I presumed that /usr/local/lib would be on the default search path
> for shared libraries. Bad assumption I guess.

On Linux, having /usr/local/lib in the search path is quite
common. The default search path is defined in /etc/ld.so.conf. What
distribution are you using? Perhaps somebody forgot to run
/sbin/ldconfig after installing the tcl library? Does tclsh find it?

Regards,
Martin



From jbearce at copeland.com  Mon Sep 18 13:22:36 2000
From: jbearce at copeland.com (jbearce at copeland.com)
Date: Mon, 18 Sep 2000 07:22:36 -0400
Subject: [Python-Dev] Re: [ Bug #110662 ] rfc822 (PR#358)
Message-ID: <OF66DA0B3D.234625E6-ON8525695E.003DFEEF@rsd.citistreet.org>

No, the line break wasn't intentional.  I ran into this problem on a stock
RedHat 6.2 (intel) system with python 1.5.2 reading pages from an iPlanet
Enterprise Server 4.1 on an NT box.  The patch I included fixed the problem
for me.  This was a consistent problem for me, so I should be able to
reproduce it, and I'll send you any new info I can gather.  I'll also
try 2.0b1 with my script to see if it works.

Thanks,
Jim



                                                                                                                                
                    "Martin v. Loewis"                                                                                          
                    <martin at loewis.home.cs.tu-        To:     jbearce at copeland.com                                              
                    berlin.de>                        cc:     python-dev at python.org                                             
                                                      Subject:     [ Bug #110662 ] rfc822 (PR#358)                              
                    09/17/2000 07:58 AM                                                                                         
                                                                                                                                
                                                                                                                                




Regarding your report in

http://sourceforge.net/bugs/?func=detailbug&bug_id=110662&group_id=5470

I can't reproduce the problem. In 2.0b1,

>>> s="Location:
https://www.website.com:443/tengah/Dpc/vContent.jhtml?page_type=3&PLANID=4&CONTENTPAGEID=0&TengahSession=312442259237-529/2748412123003458168/-1407548368/4/7002/7002/7004/7004\r\n\r\n
"
>>> t=rfc822.Message(cStringIO.StringIO(s))
>>> t['location']
'https://www.website.com:443/tengah/Dpc/vContent.jhtml?page_type=3&PLANID=4&CONTENTPAGEID=0&TengahSession=312442259237-529/2748412123003458168/-1407548368/4/7002/7002/7004/7004'


works fine for me. If the line break between Location: and the URL in
the original report was intentional, rfc822.Message is right in
rejecting the header: Continuation lines must start with white space.

I also cannot see how the patch could improve anything; proper
continuation lines are already supported. On what system did you
experience the problem?

If I misunderstood the report, please let me know.

Regards,
Martin





From bwarsaw at beopen.com  Mon Sep 18 15:35:32 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 18 Sep 2000 09:35:32 -0400 (EDT)
Subject: [Python-Dev] problems importing _tkinter on Linux build
References: <20000917142718.A25180@ActiveState.com>
	<20000917144614.A25718@ActiveState.com>
Message-ID: <14790.6692.908424.16235@anthem.concentric.net>

>>>>> "TM" == Trent Mick <trentm at ActiveState.com> writes:

    TM> Duh, learning about LD_LIBRARY_PATH (set LD_LIBRARY_PATH to
    TM> /usr/local/lib) and everything is hunky dory. I presumed that
    TM> /usr/local/lib would be on the default search path for shared
    TM> libraries. Bad assumption I guess.

Also, look at the -R flag to ld.  In my experience (primarily on
Solaris), any time you compiled with a -L flag you absolutely /had/ to
include a similar -R flag, otherwise you'd force all your users to set
LD_LIBRARY_PATH.

-Barry


From trentm at ActiveState.com  Mon Sep 18 18:39:04 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Mon, 18 Sep 2000 09:39:04 -0700
Subject: [Python-Dev] problems importing _tkinter on Linux build
In-Reply-To: <14790.6692.908424.16235@anthem.concentric.net>; from bwarsaw@beopen.com on Mon, Sep 18, 2000 at 09:35:32AM -0400
References: <20000917142718.A25180@ActiveState.com> <20000917144614.A25718@ActiveState.com> <14790.6692.908424.16235@anthem.concentric.net>
Message-ID: <20000918093904.A23881@ActiveState.com>

On Mon, Sep 18, 2000 at 09:35:32AM -0400, Barry A. Warsaw wrote:
> 
> >>>>> "TM" == Trent Mick <trentm at ActiveState.com> writes:
> 
>     TM> Duh, learning about LD_LIBRARY_PATH (set LD_LIBRARY_PATH to
>     TM> /usr/local/lib) and everything is hunky dory. I presumed that
>     TM> /usr/local/lib would be on the default search path for shared
>     TM> libraries. Bad assumption I guess.
> 
> Also, look at the -R flag to ld.  In my experience (primarily on
> Solaris), any time you compiled with a -L flag you absolutely /had/ to
> include a similar -R flag, otherwise you'd force all your users to set
> LD_LIBRARY_PATH.
> 

Thanks, Barry. Reading about -R led me to -rpath, which works for me. Here is
the algorithm from the info docs:

`-rpath-link DIR'
     When using ELF or SunOS, one shared library may require another.
     This happens when an `ld -shared' link includes a shared library
     as one of the input files.

     When the linker encounters such a dependency when doing a
     non-shared, non-relocateable link, it will automatically try to
     locate the required shared library and include it in the link, if
     it is not included explicitly.  In such a case, the `-rpath-link'
     option specifies the first set of directories to search.  The
     `-rpath-link' option may specify a sequence of directory names
     either by specifying a list of names separated by colons, or by
     appearing multiple times.

     The linker uses the following search paths to locate required
     shared libraries.
       1. Any directories specified by `-rpath-link' options.

       2. Any directories specified by `-rpath' options.  The difference
          between `-rpath' and `-rpath-link' is that directories
          specified by `-rpath' options are included in the executable
          and used at runtime, whereas the `-rpath-link' option is only
          effective at link time.

       3. On an ELF system, if the `-rpath' and `rpath-link' options
          were not used, search the contents of the environment variable
          `LD_RUN_PATH'.

       4. On SunOS, if the `-rpath' option was not used, search any
          directories specified using `-L' options.

       5. For a native linker, the contents of the environment variable
          `LD_LIBRARY_PATH'.

       6. The default directories, normally `/lib' and `/usr/lib'.

     For the native ELF linker, as the last resort, the contents of
     /etc/ld.so.conf is used to build the set of directories to search.

     If the required shared library is not found, the linker will issue
     a warning and continue with the link.
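The search order quoted above can be modeled as a simple precedence list. The sketch below is hypothetical (it is not the linker's code, and it omits the SunOS-specific step 4 and the ld.so.conf fallback), but it shows how the directory list would be assembled for a native ELF linker:

```python
def shared_lib_search_dirs(rpath_link=None, rpath=None, env=None,
                           default_dirs=('/lib', '/usr/lib')):
    """Assemble ld's search order for required shared libraries on a
    native ELF linker (illustrative sketch, not the real algorithm)."""
    env = env or {}
    dirs = list(rpath_link or [])        # 1. -rpath-link (link time only)
    dirs += list(rpath or [])            # 2. -rpath (also recorded in the binary)
    if not rpath_link and not rpath and env.get('LD_RUN_PATH'):
        dirs += env['LD_RUN_PATH'].split(':')       # 3. LD_RUN_PATH fallback
    if env.get('LD_LIBRARY_PATH'):
        dirs += env['LD_LIBRARY_PATH'].split(':')   # 5. runtime environment
    dirs += list(default_dirs)           # 6. the defaults
    return dirs

print(shared_lib_search_dirs(rpath=['/usr/local/lib']))
# ['/usr/local/lib', '/lib', '/usr/lib']
```

The key distinction for Trent's problem is step 2: only `-rpath` directories survive into the executable, which is why `-L` alone forced users to set LD_LIBRARY_PATH.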


Trent


-- 
Trent Mick
TrentM at ActiveState.com


From trentm at ActiveState.com  Mon Sep 18 18:42:51 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Mon, 18 Sep 2000 09:42:51 -0700
Subject: [Python-Dev] problems importing _tkinter on Linux build
In-Reply-To: <200009180659.IAA14068@loewis.home.cs.tu-berlin.de>; from martin@loewis.home.cs.tu-berlin.de on Mon, Sep 18, 2000 at 08:59:33AM +0200
References: <200009180659.IAA14068@loewis.home.cs.tu-berlin.de>
Message-ID: <20000918094251.B23881@ActiveState.com>

On Mon, Sep 18, 2000 at 08:59:33AM +0200, Martin v. Loewis wrote:
> > I presumed that /usr/local/lib would be on the default search path
> > for shared libraries. Bad assumption I guess.
> 
> On Linux, having /usr/local/lib in the search path is quite
> common. The default search path is defined in /etc/ld.so.conf. What
> distribution are you using? Perhaps somebody forgot to run
> /sbin/ldconfig after installing the tcl library? Does tclsh find it?

Using RedHat 6.2


[trentm at molotok ~]$ cat /etc/ld.so.conf
/usr/X11R6/lib
/usr/i486-linux-libc5/lib


So no /usr/local/lib there. Barry's suggestion worked for me, though I think
I agree that /usr/local/lib is a reasonable path to include.
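For reference, the dynamic-loader fix Martin alludes to is a one-line config change plus a cache rebuild (run as root; exact paths may vary by distribution):

```shell
# Add /usr/local/lib to the runtime loader's search path, then
# rebuild the cache so existing binaries can find libraries there.
echo /usr/local/lib >> /etc/ld.so.conf
/sbin/ldconfig
```

This fixes lookup at run time for all binaries, whereas the -R/-rpath approach bakes the path into one executable at link time.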

Thanks,
Trent

-- 
Trent Mick
TrentM at ActiveState.com


From jeremy at beopen.com  Tue Sep 19 00:33:02 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Mon, 18 Sep 2000 18:33:02 -0400 (EDT)
Subject: [Python-Dev] guidelines for bug triage
Message-ID: <14790.38942.543387.233812@bitdiddle.concentric.net>

Last week I promised to post some guidelines on bug triage.  In the
interim, the number of open bugs has dropped by about 30.  We still
have 71 open bugs to deal with.  The goal is to get the number of open
bugs below 50 before the 2.0b2 release next week, so there is still a
lot to do.  So I've written up some general guidelines, which I'll
probably put in a PEP.

One thing that the guidelines lack is a list of people willing to
handle bug reports and their areas of expertise.  If people send me
email with that information, I'll include it in the PEP.

Jeremy


1. Make sure the bug category and bug group are correct.  If they are 
   correct, it is easier for someone interested in helping to find
   out, say, what all the open Tkinter bugs are.

2. If it's a minor feature request that you don't plan to address
   right away, add it to PEP 42 or ask the owner to add it for you.
   If you add the bug to PEP 42, mark the bug as "feature request",
   "later", and "closed"; and add a comment to the bug saying that
   this is the case (mentioning the PEP explicitly).

3. Assign the bug a reasonable priority.  We don't yet have a clear
   sense of what each priority should mean, except that 9 is highest
   and 1 is lowest.  One rule, however, is that bugs with priority
   seven or higher must be fixed before the next release.

4. If a bug report doesn't have enough information to allow you to
   reproduce or diagnose it, send email to the original submitter and
   ask for more information.  If the original report is really thin
   and your email doesn't get a response after a reasonable waiting
   period, you can close the bug.

5. If you fix a bug, mark the status as "Fixed" and close it.  In the
   comments, include the CVS revision numbers of the affected
   files.  In the CVS checkin message, include the SourceForge bug
   number *and* a normal description of the change.

6. If you are assigned a bug that you are unable to deal with, assign
   it to someone else.  The guys at PythonLabs get paid to fix these
   bugs, so pick one of them if there is no other obvious candidate.



From barry at scottb.demon.co.uk  Tue Sep 19 00:28:46 2000
From: barry at scottb.demon.co.uk (Barry Scott)
Date: Mon, 18 Sep 2000 23:28:46 +0100
Subject: [Python-Dev] Python 1.5.2 modules need porting to 2.0 because of unicode - comments please
Message-ID: <000001c021bf$cf081f20$060210ac@private>

I have managed to get all our critical python code up and
running under 2.0b1#4, around 15,000 lines. We use win32com
and wxPython extensions. The code drives SourceSafe and includes
a Web server that schedules builds for us.

The only problem I encountered was mixing string
and unicode types.

Using the smtplib I was passing in a unicode type as the body
of the message. The send() call hangs. I use encode() and all
is well.

Is this a user error in the use of smtplib or a bug?

I found that I had a lot of unicode floating around from win32com
that I was passing into wxPython. It checks for string and raises
exceptions. More use of encode() and we are up and running.

Is this what you expected when you added unicode?

		Barry



From barry at scottb.demon.co.uk  Tue Sep 19 00:43:59 2000
From: barry at scottb.demon.co.uk (Barry Scott)
Date: Mon, 18 Sep 2000 23:43:59 +0100
Subject: [Python-Dev] How do you want bugs reported against 2.0 beta?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEEIHFAA.tim_one@email.msn.com>
Message-ID: <000201c021c1$ef71c7f0$060210ac@private>

At the risk of having my head bitten off again...

Why don't you tell people how to report bugs in python on the web site
or the documentation?

I'd expect this info in the docs and on the web site for python.

	Barry



From guido at beopen.com  Tue Sep 19 01:45:12 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 18 Sep 2000 18:45:12 -0500
Subject: [Python-Dev] Python 1.5.2 modules need porting to 2.0 because of unicode - comments please
In-Reply-To: Your message of "Mon, 18 Sep 2000 23:28:46 +0100."
             <000001c021bf$cf081f20$060210ac@private> 
References: <000001c021bf$cf081f20$060210ac@private> 
Message-ID: <200009182345.SAA03116@cj20424-a.reston1.va.home.com>

> I have managed to get all our critical python code up and
> running under 2.0b1#4, around 15,000 lines. We use win32com
> and wxPython extensions. The code drives SourceSafe and includes
> a Web server that schedules builds for us.
> 
> The only problem I encountered was mixing string
> and unicode types.
> 
> Using the smtplib I was passing in a unicode type as the body
> of the message. The send() call hangs. I use encode() and all
> is well.
> 
> Is this a user error in the use of smtplib or a bug?
> 
> I found that I had a lot of unicode floating around from win32com
> that I was passing into wxPython. It checks for string and raises
> exceptions. More use of encode() and we are up and running.
> 
> Is this what you expected when you added unicode?

Barry, I'm unclear on what exactly is happening.  Where does the
Unicode come from?  You implied that your code worked under 1.5.2,
which doesn't support Unicode.  How can code that works under 1.5.2
suddenly start producing Unicode strings?  Unless you're now applying
the existing code to new (Unicode) input data -- in which case, yes,
we expect that fixes are sometimes needed.

The smtplib problem may be easily explained -- AFAIK, the SMTP
protocol doesn't support Unicode, and the module isn't Unicode-aware,
so it is probably writing garbage to the socket.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From guido at beopen.com  Tue Sep 19 01:51:26 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 18 Sep 2000 18:51:26 -0500
Subject: [Python-Dev] How do you want bugs reported against 2.0 beta?
In-Reply-To: Your message of "Mon, 18 Sep 2000 23:43:59 +0100."
             <000201c021c1$ef71c7f0$060210ac@private> 
References: <000201c021c1$ef71c7f0$060210ac@private> 
Message-ID: <200009182351.SAA03195@cj20424-a.reston1.va.home.com>

> At the risk of having my head bitten off again...

Don't worry, it's only a virtual bite... :-)

> Why don't you tell people how to report bugs in python on the web site
> or the documentation?
> 
> I'd expect this info in the docs and on the web site for python.

In the README file:

    Bug reports
    -----------

    To report or search for bugs, please use the Python Bug
    Tracker at http://sourceforge.net/bugs/?group_id=5470.

But I agree that nobody reads the README file any more.  So yes, it
should be added to the website.  I don't think it belongs in the
documentation pack, although Fred may disagree (where should it be
added?).

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From barry at scottb.demon.co.uk  Tue Sep 19 01:00:13 2000
From: barry at scottb.demon.co.uk (Barry Scott)
Date: Tue, 19 Sep 2000 00:00:13 +0100
Subject: [Python-Dev] 2.0 Optimization & speed
In-Reply-To: <200009081623.SAA14090@python.inrialpes.fr>
Message-ID: <000701c021c4$3412d550$060210ac@private>

There needs to be a set of benchmarks that can be used to test the effect
of any changes. Is there an existing set that can be used?

		Barry


> Behalf Of Vladimir Marangozov
> 
> Continuing my impressions on the user's feedback to date: Donn Cave
> & MAL are at least two voices I've heard about an overall slowdown
> of the 2.0b1 release compared to 1.5.2. Frankly, I have no idea where
> this slowdown comes from and I believe that we have only vague guesses
> about the possible causes: unicode database, more opcodes in ceval, etc.
> 
> I wonder whether we are in a position to try improving Python's
> performance with some `wise quickies' in a next beta. But this raises
> a more fundamental question on what is our margin for manoeuvres at this
> point. This in turn implies that we need some classification of the
> proposed optimizations to date.
> 
> Perhaps it would be good to create a dedicated Web page for this, but
> in the meantime, let's try to build a list/table of the ideas that have
> been proposed so far. This would be useful anyway, and the list would be
> filled as time goes.
> 
> Trying to push this initiative one step further, here's a very rough start
> on the top of my head:
> 
> Category 1: Algorithmic Changes
> 
> These are the most promising, since they don't relate to pure technicalities
> but imply potential improvements with some evidence.
> I'd put in this category:
> 
> - the dynamic dictionary/string specialization by Fred Drake
>   (this is already in). Can this be applied in other areas? If so, where?
> 
> - the Python-specific mallocs. Actually, I'm pretty sure that a lot of
>   `overhead' is due to the standard mallocs which happen to be expensive
>   for Python in both space and time. Python is very malloc-intensive.
>   The only reason I've postponed my obmalloc patch is that I still haven't
>   provided an interface which allows evaluating it's impact on the
>   mem size consumption. It gives noticeable speedup on all machines, so
>   it accounts as a good candidate w.r.t. performance.
> 
> - ??? (maybe some parts of MAL's optimizations could go here)
> 
> Category 2: Technical / Code optimizations
> 
> This category includes all (more or less) controversial proposals, like
> 
> - my latest lookdict optimizations (a typical controversial `quickie')
> 
> - opcode folding & reordering. Actually, I'm unclear on why Guido
>   postponed the reordering idea; it has received positive feedback
>   and all theoretical reasoning and practical experiments showed that
>   this "could" help, although without any guarantees. Nobody reported
>   slowdowns, though. This is typically a change without real dangers.
> 
> - kill the async / pending calls logic. (Tim, what happened with this
>   proposal?)
> 
> - compact the unicodedata database, which is expected to reduce the
>   mem footprint, maybe improve startup time, etc. (ongoing)
> 
> - proposal about optimizing the "file hits" on startup.
> 
> - others?
> 
> If there are potential `wise quickies', maybe it's good to refresh
> them now and experiment a bit more before the final release?
> 
> -- 
>        Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
> http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://www.python.org/mailman/listinfo/python-dev
> 


From MarkH at ActiveState.com  Tue Sep 19 01:18:18 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Tue, 19 Sep 2000 10:18:18 +1100
Subject: [Python-Dev] Python 1.5.2 modules need porting to 2.0 because of unicode - comments please
In-Reply-To: <200009182345.SAA03116@cj20424-a.reston1.va.home.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBIEPDDJAA.MarkH@ActiveState.com>

[Guido]

> Barry, I'm unclear on what exactly is happening.  Where does the
> Unicode come from?  You implied that your code worked under 1.5.2,
> which doesn't support Unicode.  How can code that works under 1.5.2
> suddenly start producing Unicode strings?  Unless you're now applying
> the existing code to new (Unicode) input data -- in which case, yes,
> we expect that fixes are sometimes needed.

My guess is that the Unicode strings are coming from COM.  In 1.5, we used
the Win32 specific Unicode object, and win32com did lots of explicit
str()s - the user of the end object usually saw real Python strings.

For 1.6 and later, I changed this, so that real Python Unicode objects are
used and returned instead of the strings.  I figured this would be a good
test for Unicode integration, as Unicode and strings are ultimately
supposed to be interchangeable ;-)

win32com.client.__init__ starts with:

NeedUnicodeConversions = not hasattr(__builtin__, "unicode")

This forces the flag to "true" on 1.5, and false otherwise.  Barry can force it
to "true", and win32com will always force a str() over all Unicode objects.

However, this will _still_ break in a few cases (and I have had some
reported).  str() of a Unicode object can often raise that ugly "char out
of range" error.  As Barry notes, the code would have to change to do an
"encode('mbcs')" to be safe anyway...

But regardless of where Barry's Unicode objects come from, his point
remains open.  Do we consider the library's lack of Unicode awareness a
bug, or do we drop any pretence of string and unicode objects being
interchangeable?

As a related issue, do we consider the fact that str(unicode_ob) often
fails to be a problem?  The users on c.l.py appear to...

Mark.



From gward at mems-exchange.org  Tue Sep 19 01:29:00 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Mon, 18 Sep 2000 19:29:00 -0400
Subject: [Python-Dev] Speaking of bug triage...
Message-ID: <20000918192859.A12253@ludwig.cnri.reston.va.us>

... just what are the different categories supposed to mean?
Specifically, what's the difference between "Library" and "Modules"?

The library-related open bugs in the "Library" category cover the
following modules:
  * anydbm
  * rfc822 (several!)
  * mimedecode
  * urlparse
  * cmath
  * CGIHTTPServer

And in the "Modules" category we have:
  * mailbox
  * socket/os
  * re/sre (several)
  * anydbm
  * xml/_xmlplus
  * cgi/xml

Hmmm... looks to me like there's no difference between "Library" and
"Modules" -- heck, I could have guessed that just from looking at the
names.  The library *is* modules!

Was this perhaps meant to be a distinction between pure Python and
extension modules?

        Greg


From jeremy at beopen.com  Tue Sep 19 01:36:41 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Mon, 18 Sep 2000 19:36:41 -0400 (EDT)
Subject: [Python-Dev] Speaking of bug triage...
In-Reply-To: <20000918192859.A12253@ludwig.cnri.reston.va.us>
References: <20000918192859.A12253@ludwig.cnri.reston.va.us>
Message-ID: <14790.42761.418440.578432@bitdiddle.concentric.net>

>>>>> "GW" == Greg Ward <gward at mems-exchange.org> writes:

  GW> Was this perhaps meant to be a distinction between pure Python
  GW> and extension modules?

That's right -- Library == ".py" and Modules == ".c".  Perhaps not the
best names, but they're short.

Jeremy


From tim_one at email.msn.com  Tue Sep 19 01:34:30 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Mon, 18 Sep 2000 19:34:30 -0400
Subject: [Python-Dev] Speaking of bug triage...
In-Reply-To: <20000918192859.A12253@ludwig.cnri.reston.va.us>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEMNHGAA.tim_one@email.msn.com>

[Greg Ward]
> ... just what are the different categories supposed to mean?
> Specifically, what's the difference between "Library" and "Modules"?

Nobody knows.  I've been using Library for .py files under Lib/, and Modules
for anything written in C whose name works in an "import".  Other people are
doing other things, but they're wrong <wink>.




From guido at beopen.com  Tue Sep 19 02:43:17 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 18 Sep 2000 19:43:17 -0500
Subject: [Python-Dev] Speaking of bug triage...
In-Reply-To: Your message of "Mon, 18 Sep 2000 19:36:41 -0400."
             <14790.42761.418440.578432@bitdiddle.concentric.net> 
References: <20000918192859.A12253@ludwig.cnri.reston.va.us>  
            <14790.42761.418440.578432@bitdiddle.concentric.net> 
Message-ID: <200009190043.TAA06331@cj20424-a.reston1.va.home.com>

>   GW> Was this perhaps meant to be a distinction between pure Python
>   GW> and extension modules?
> 
> That's right -- Library == ".py" and Modules == ".c".  Perhaps not the
> best names, but they're short.

Think "subdirectories in the source tree" and you'll never make a
mistake again.  (For this particular choice. :-)

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From barry at scottb.demon.co.uk  Tue Sep 19 01:43:25 2000
From: barry at scottb.demon.co.uk (Barry Scott)
Date: Tue, 19 Sep 2000 00:43:25 +0100
Subject: [Python-Dev] Python 1.5.2 modules need porting to 2.0 because of unicode - comments please
In-Reply-To: <200009182345.SAA03116@cj20424-a.reston1.va.home.com>
Message-ID: <000801c021ca$3c9daa50$060210ac@private>

Mark's Python COM code is the source of the unicode. I'm guessing that the
old 1.5.2 support coerced to string, and now that unicode is around Mark's
code gives me unicode strings. Our app is driving Microsoft Visual
SourceSafe through COM.
SourceSafe thru COM.

The offending line that upgraded all strings to unicode and broke mail:

file.write( 'Crit: Searching for new and changed files since label %s\n' % previous_source_label )

previous_source_label is unicode from a call to a COM object.

file is a StringIO object.

		Barry

> -----Original Message-----
> From: python-dev-admin at python.org [mailto:python-dev-admin at python.org]On
> Behalf Of Guido van Rossum
> Sent: 19 September 2000 00:45
> To: Barry Scott
> Cc: PythonDev
> Subject: Re: [Python-Dev] Python 1.5.2 modules need porting to 2.0
> because of unicode - comments please
> 
> 
> > I have managed to get all our critical python code up and
> > running under 2.0b1#4, around 15,000 lines. We use win32com
> > and wxPython extensions. The code drives SourceSafe and includes
> > a Web server that schedules builds for us.
> > 
> > The only problem I encountered was mixing string
> > and unicode types.
> > 
> > Using the smtplib I was passing in a unicode type as the body
> > of the message. The send() call hangs. I use encode() and all
> > is well.
> > 
> > Is this a user error in the use of smtplib or a bug?
> > 
> > I found that I had a lot of unicode floating around from win32com
> > that I was passing into wxPython. It checks for string and raises
> > exceptions. More use of encode() and we are up and running.
> > 
> > Is this what you expected when you added unicode?
> 
> Barry, I'm unclear on what exactly is happening.  Where does the
> Unicode come from?  You implied that your code worked under 1.5.2,
> which doesn't support Unicode.  How can code that works under 1.5.2
> suddenly start producing Unicode strings?  Unless you're now applying
> the existing code to new (Unicode) input data -- in which case, yes,
> we expect that fixes are sometimes needed.
> 
> The smtplib problem may be easily explained -- AFAIK, the SMTP
> protocol doesn't support Unicode, and the module isn't Unicode-aware,
> so it is probably writing garbage to the socket.
> 
> --Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://www.python.org/mailman/listinfo/python-dev
> 


From fdrake at beopen.com  Tue Sep 19 01:45:55 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Mon, 18 Sep 2000 19:45:55 -0400 (EDT)
Subject: [Python-Dev] How do you want bugs reported against 2.0 beta?
In-Reply-To: <000201c021c1$ef71c7f0$060210ac@private>
References: <LNBBLJKPBEHFEDALKOLCCEEIHFAA.tim_one@email.msn.com>
	<000201c021c1$ef71c7f0$060210ac@private>
Message-ID: <14790.43315.8034.192884@cj42289-a.reston1.va.home.com>

Barry Scott writes:
 > At the risk of having my head bitten off again...
 > 
 > Why don't you tell people how to report bugs in python on the web site
 > or the documentation?
 > 
 > I'd expect this info in the docs and on the web site for python.

  Good point.  I think this should be available at both locations as
well.  I'll see what I can do about the documentation.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From gward at mems-exchange.org  Tue Sep 19 01:55:35 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Mon, 18 Sep 2000 19:55:35 -0400
Subject: [Python-Dev] Speaking of bug triage...
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEMNHGAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Mon, Sep 18, 2000 at 07:34:30PM -0400
References: <20000918192859.A12253@ludwig.cnri.reston.va.us> <LNBBLJKPBEHFEDALKOLCGEMNHGAA.tim_one@email.msn.com>
Message-ID: <20000918195535.A19131@ludwig.cnri.reston.va.us>

On 18 September 2000, Tim Peters said:
> Nobody knows.  I've been using Library for .py files under Lib/, and Modules
> for anything written in C whose name works in an "import".  Other people are
> doing other things, but they're wrong <wink>.

That's what I suspected.  I've just reclassified a couple of bugs.  I
left ambiguous ones where they were.

        Greg


From barry at scottb.demon.co.uk  Tue Sep 19 02:05:17 2000
From: barry at scottb.demon.co.uk (Barry Scott)
Date: Tue, 19 Sep 2000 01:05:17 +0100
Subject: [Python-Dev] Python 1.5.2 modules need porting to 2.0 because of unicode - comments please
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBIEPDDJAA.MarkH@ActiveState.com>
Message-ID: <000901c021cd$4a9b2df0$060210ac@private>

> But regardless of where Barry's Unicode objects come from, his point
> remains open.  Do we consider the library's lack of Unicode awareness a
> bug, or do we drop any pretence of string and unicode objects being
> interchangeable?
> 
> As a related issue, do we consider that str(unicode_ob) often fails is a
> problem?  The users on c.l.py appear to...
> 
> Mark.

Exactly.

I want unicode from Mark's code, unicode is goodness.

But the principle of least astonishment may well be broken in the library,
indeed in the language.

It took me 40 minutes to prove that the unicode came from Mark's code and
I know the code involved intimately. Debugging these failures is tedious.

I don't have an opinion as to the best resolution yet.

One option would be for Mark's code to default to string. But that does not
help once someone chooses to enable unicode in Mark's code.

Maybe '%s' % u'x' should return 'x' not u'x' and u'%s' % 's' return u's'

Maybe 's' + u'x' should return 'sx' not u'sx'. and u's' + 'x' returns u'sx'

The above two maybes would have hidden the problem in my code, barring exceptions.

	Barry



From barry at scottb.demon.co.uk  Tue Sep 19 02:13:33 2000
From: barry at scottb.demon.co.uk (Barry Scott)
Date: Tue, 19 Sep 2000 01:13:33 +0100
Subject: [Python-Dev] How do you want bugs reported against 2.0 beta?
In-Reply-To: <200009182351.SAA03195@cj20424-a.reston1.va.home.com>
Message-ID: <000a01c021ce$72b5cab0$060210ac@private>

What README?  It's not on my Start - Programs - Python 2.0 menu.

You don't mean I have to look on the disk do you :-)

	Barry

> -----Original Message-----
> From: guido at cj20424-a.reston1.va.home.com
> [mailto:guido at cj20424-a.reston1.va.home.com]On Behalf Of Guido van
> Rossum
> Sent: 19 September 2000 00:51
> To: Barry Scott
> Cc: PythonDev
> Subject: Re: [Python-Dev] How do you want bugs reported against 2.0
> beta?
> 
> 
> > At the risk of having my head bitten off again...
> 
> Don't worry, it's only a virtual bite... :-)
> 
> > Why don't you tell people how to report bugs in python on the web site
> > or the documentation?
> > 
> > I'd expect this info in the docs and on the web site for python.
> 
> In the README file:
> 
>     Bug reports
>     -----------
> 
>     To report or search for bugs, please use the Python Bug
>     Tracker at http://sourceforge.net/bugs/?group_id=5470.
> 
> But I agree that nobody reads the README file any more.  So yes, it
> should be added to the website.  I don't think it belongs in the
> documentation pack, although Fred may disagree (where should it be
> added?).
> 
> --Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)
> 


From tim_one at email.msn.com  Tue Sep 19 02:22:13 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Mon, 18 Sep 2000 20:22:13 -0400
Subject: [Python-Dev] 2.0 Optimization & speed
In-Reply-To: <000701c021c4$3412d550$060210ac@private>
Message-ID: <LNBBLJKPBEHFEDALKOLCKENBHGAA.tim_one@email.msn.com>

[Barry Scott]
> There needs to be a set of benchmarks that can be used to test
> the effect of any changes. Is there a set that exist already that
> can be used?

None adequate.  Calls for volunteers in the past have been met with silence.

Lib/test/pystone.py is remarkable in that it may be the least typical of all
Python programs <0.4 wink>.  It seems a good measure of how long it takes to
make a trip around the eval loop, though.
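Absent a standard suite, a crude trip-around-the-eval-loop measurement is easy to sketch. This illustration is not pystone; the function name is made up, and `time.perf_counter` is a modern substitute for the `time.clock` a 2.0-era script would use:

```python
import time

def time_empty_loop(n=1_000_000):
    """Rough proxy for the cost of one trip around the bytecode eval
    loop: time n iterations of a loop whose body does nothing."""
    t0 = time.perf_counter()   # 2.0-era code would use time.clock()
    for _ in range(n):
        pass
    return time.perf_counter() - t0

per_trip = time_empty_loop() / 1_000_000
print('~%.1f ns per loop iteration' % (per_trip * 1e9))
```

Numbers like this are only useful for before/after comparison on the same machine, which is exactly the benchmarking gap Barry is pointing at.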

Marc-Andre Lemburg put together a much fancier suite, that times a wide
variety of basic Python operations and constructs more-or-less in isolation
from each other.  It can be very helpful in pinpointing specific timing
regressions.

That's it.




From tim_one at email.msn.com  Tue Sep 19 06:44:56 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 19 Sep 2000 00:44:56 -0400
Subject: [Python-Dev] test_minidom now failing on Windows
Message-ID: <LNBBLJKPBEHFEDALKOLCGENMHGAA.tim_one@email.msn.com>

http://sourceforge.net/bugs/?func=detailbug&bug_id=114775&group_id=5470

Add info (fails on Linux?  Windows-specific?) or fix or something; assigned
to Paul.




From guido at beopen.com  Tue Sep 19 08:05:55 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 19 Sep 2000 01:05:55 -0500
Subject: [Python-Dev] test_minidom now failing on Windows
In-Reply-To: Your message of "Tue, 19 Sep 2000 00:44:56 -0400."
             <LNBBLJKPBEHFEDALKOLCGENMHGAA.tim_one@email.msn.com> 
References: <LNBBLJKPBEHFEDALKOLCGENMHGAA.tim_one@email.msn.com> 
Message-ID: <200009190605.BAA01019@cj20424-a.reston1.va.home.com>

> http://sourceforge.net/bugs/?func=detailbug&bug_id=114775&group_id=5470
> 
> Add info (fails on Linux?  Windows-specific?) or fix or something; assigned
> to Paul.

It's obviously broken.  The test output contains numbers that are
specific per run:

<xml.dom.minidom.Document instance at 0xa104c8c>

and

[('168820100<class xml.dom.minidom.Element at 0xa0cc58c>', "{'childNodes': []}"), ('168926628<class xml.dom.minidom.Text at 0xa0ccfac>', "{'childNodes': []}"), ('168722260<class xml.dom.minidom.Text at 0xa0ccfac>', "{'childNodes': []}"), ('168655020<class xml.dom.minidom.Text at 0xa0ccfac>', "{'childNodes': []}"), ('168650868<class xml.dom.minidom.Text at 0xa0ccfac>', "{'childNodes': []}"), ('168663308<class xml.dom.minidom.Text at 0xa0ccfac>', "{'childNodes': []}"), ('168846892<class xml.dom.minidom.Text at 0xa0ccfac>', "{'childNodes': []}"), ('169039972<class xml.dom.minidom.Text at 0xa0ccfac>', "{'childNodes': []}"), ('168666508<class xml.dom.minidom.Element at 0xa0cc58c>', "{'childNodes': []}"), ('168730780<class xml.dom.minidom.Element at 0xa0cc58c>', "{'childNodes': []}")]

Paul, please fix this!!!!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From martin at loewis.home.cs.tu-berlin.de  Tue Sep 19 10:13:16 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Tue, 19 Sep 2000 10:13:16 +0200
Subject: [Python-Dev] Python 1.5.2 modules need porting to 2.0 because of unicode - comments please
Message-ID: <200009190813.KAA01033@loewis.home.cs.tu-berlin.de>

> The smtplib problem may be easily explained -- AFAIK, the SMTP
> protocol doesn't support Unicode, and the module isn't
> Unicode-aware, so it is probably writing garbage to the socket.

I've investigated this somewhat, and noticed the cause of the problem.
The send method of the socket passes the raw memory representation of
the Unicode object to send(2). On i386, this comes out as UTF-16LE.
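Martin's observation is easy to reproduce with today's codecs: on a little-endian machine the raw two-bytes-per-character representation interleaves NUL bytes, which is exactly the garbage an ASCII-only SMTP server would receive. A hedged modern rendering:

```python
# What the server sees if the internal 16-bit representation is written
# to the socket, versus a properly encoded byte string.
raw = 'HELO'.encode('utf-16-le')
assert raw == b'H\x00E\x00L\x00O\x00'   # NUL-interleaved "garbage"

encoded = 'HELO'.encode('ascii')
assert encoded == b'HELO'               # what the protocol expects

print(raw)
print(encoded)
```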

It appears that this behaviour is not documented anywhere (where is
the original specification of the Unicode type, anyway?).

I believe this behaviour is a bug, on the grounds of being
confusing. The same holds for writing a Unicode string to a file in
binary mode. Again, it should not write out the internal
representation. Or else, why doesn't file.write(42) work? I want it
to write the internal representation in binary :-)

So in essence, I suggest that the Unicode object does not implement
the buffer interface. If that has any undesirable consequences (which
ones?), I suggest that 'binary write' operations (sockets, files)
explicitly check for Unicode objects, and either reject them, or
invoke the system encoding (i.e. ASCII). 

In the case of smtplib, this would do the right thing: the protocol
requires ASCII commands, so if anybody passes a Unicode string with
characters outside ASCII, you'd get an error.
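[In modern Python terms, that proposed behaviour looks like this: encoding to ASCII raises for characters outside the ASCII range, which is exactly the explicit error argued for here rather than silently sending raw internal bytes. The SMTP command text is only an example:]

```python
# An all-ASCII command encodes cleanly...
ok = "MAIL FROM:<user@example.com>".encode("ascii")

# ...while a non-ASCII character triggers an explicit error instead of
# silently writing garbage to the socket.
try:
    "MAIL FROM:<usér@example.com>".encode("ascii")
    raised = False
except UnicodeEncodeError:
    raised = True
```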

Regards,
Martin



From effbot at telia.com  Tue Sep 19 10:35:29 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 19 Sep 2000 10:35:29 +0200
Subject: [Python-Dev] Python 1.5.2 modules need porting to 2.0 because of unicode - comments please
References: <200009190813.KAA01033@loewis.home.cs.tu-berlin.de>
Message-ID: <00cd01c02214$94c4f540$766940d5@hagrid>

martin wrote:

> I've investigated this somewhat, and noticed the cause of the problem.
> The send method of the socket passes the raw memory representation of
> the Unicode object to send(2). On i386, this comes out as UTF-16LE.
...
> I believe this behaviour is a bug, on the grounds of being
> confusing. The same holds for writing a Unicode string to a file in
> binary mode. Again, it should not write out the internal
> representation. Or else, why doesn't file.write(42) work? I want that
> it writes the internal representation in binary :-)
...
> So in essence, I suggest that the Unicode object does not implement
> the buffer interface.

I agree.

</F>



From mal at lemburg.com  Tue Sep 19 10:35:33 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 19 Sep 2000 10:35:33 +0200
Subject: [Python-Dev] 2.0 Optimization & speed
References: <LNBBLJKPBEHFEDALKOLCKENBHGAA.tim_one@email.msn.com>
Message-ID: <39C72555.E14D747C@lemburg.com>

Tim Peters wrote:
> 
> [Barry Scott]
> > There needs to be a set of benchmarks that can be used to test
> > the effect of any changes. Is there a set that exist already that
> > can be used?
> 
> None adequate.  Calls for volunteers in the past have been met with silence.
> 
> Lib/test/pystone.py is remarkable in that it may be the least typical of all
> Python programs <0.4 wink>.  It seems a good measure of how long it takes to
> make a trip around the eval loop, though.
> 
> Marc-Andre Lemburg put together a much fancier suite, that times a wide
> variety of basic Python operations and constructs more-or-less in isolation
> from each other.  It can be very helpful in pinpointing specific timing
> regressions.

Plus it's extensible, so you can add whatever test you feel you
need by simply dropping in a new module and editing a Setup
module. pybench is available from my Python Pages.
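[The drop-in extensibility idea can be sketched with today's timeit module (the names below are illustrative; pybench's actual API differs):]

```python
import timeit

# A registry of isolated micro-benchmarks: extending the suite is just a
# matter of dropping another named callable in here.
TESTS = {
    "string concat": lambda: "py" + "thon",
    "list comprehension": lambda: [i * i for i in range(100)],
}

def run(number=1000):
    # Time each registered test in isolation, like pybench does per module.
    return {name: timeit.timeit(fn, number=number)
            for name, fn in TESTS.items()}

results = run(number=100)
```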

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From mal at lemburg.com  Tue Sep 19 11:02:46 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 19 Sep 2000 11:02:46 +0200
Subject: [Python-Dev] Python 1.5.2 modules need porting to 2.0 because of 
 unicode - comments please
References: <200009190813.KAA01033@loewis.home.cs.tu-berlin.de>
Message-ID: <39C72BB6.A45A8E77@lemburg.com>

"Martin v. Loewis" wrote:
> 
> > The smtplib problem may be easily explained -- AFAIK, the SMTP
> > protocol doesn't support Unicode, and the module isn't
> > Unicode-aware, so it is probably writing garbage to the socket.
> 
> I've investigated this somewhat, and noticed the cause of the problem.
> The send method of the socket passes the raw memory representation of
> the Unicode object to send(2). On i386, this comes out as UTF-16LE.

The send method probably uses "s#" to write out the data. Since
this maps to the getreadbuf buffer slot, the Unicode object returns
a pointer to the internal buffer.
 
> It appears that this behaviour is not documented anywhere (where is
> the original specification of the Unicode type, anyway).

Misc/unicode.txt has it all. Documentation for PyArg_ParseTuple()
et al. is in Doc/ext/ext.tex.
 
> I believe this behaviour is a bug, on the grounds of being
> confusing. The same holds for writing a Unicode string to a file in
> binary mode. Again, it should not write out the internal
> representation. Or else, why doesn't file.write(42) work? I want that
> it writes the internal representation in binary :-)

This was discussed on python-dev at length earlier this year.
The outcome was that files opened in binary mode should write
raw object data to the file (using getreadbuf) while files opened
in text mode should write character data (using getcharbuf).
 
Note that Unicode objects are the first to distinguish
between getcharbuf and getreadbuf.

IMHO, the bug really is in getargs.c: "s" uses getcharbuf while
"s#" uses getreadbuf. Ideal would be using "t"+"t#" exclusively
for getcharbuf and "s"+"s#" exclusively for getreadbuf, but I guess
common usage prevents this.

> So in essence, I suggest that the Unicode object does not implement
> the buffer interface. If that has any undesirable consequences (which
> ones?), I suggest that 'binary write' operations (sockets, files)
> explicitly check for Unicode objects, and either reject them, or
> invoke the system encoding (i.e. ASCII).

It's too late for any generic changes in the Unicode area.

The right thing to do is to make the *tools* Unicode aware, since
you can't really expect the Unicode-string integration mechanism 
to fiddle things right in every possible case out there.

E.g. in the above case it is clear that 8-bit text is being sent over
the wire, so the smtplib module should explicitly call the .encode()
method to encode the data into whatever encoding is suitable.

> In the case of smtplib, this would do the right thing: the protocol
> requires ASCII commands, so if anybody passes a Unicode string with
> characters outside ASCII, you'd get an error.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From mal at lemburg.com  Tue Sep 19 11:13:13 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 19 Sep 2000 11:13:13 +0200
Subject: [Python-Dev] Python 1.5.2 modules need porting to 2.0 because of 
 unicode - comments please
References: <000901c021cd$4a9b2df0$060210ac@private>
Message-ID: <39C72E29.6593F920@lemburg.com>

Barry Scott wrote:
> 
> > But regardless of where Barry's Unicode objects come from, his point
> > remains open.  Do we consider the library's lack of Unicode awareness a
> > bug, or do we drop any pretence of string and unicode objects being
> > interchangeable?

Python's stdlib is *not* Unicode ready. This should be seen as a
project for 2.1.

> > As a related issue, do we consider it a problem that str(unicode_ob)
> > often fails?  The users on c.l.py appear to...

It will only fail if the Unicode object is not compatible with the
default encoding. If users want to use a different encoding for
interfacing Unicode to strings they should call .encode explicitly,
possibly through a helper function.

> > Mark.
> 
> Exactly.
> 
> I want unicode from Mark's code, unicode is goodness.
> 
> But the principle of least astonishment may well be broken in the library,
> indeed in the language.
> 
> It took me 40 minutes to prove that the unicode came from Mark's code and
> I know the code involved intimately. Debugging these failures is tedious.

To debug these things, simply switch off Unicode to string conversion
by editing site.py (look at the comments at the end of the module).
All conversion tries will then result in an exception.

> I don't have an opinion as to the best resolution yet.
> 
> One option would be for Mark's code to default to string. But that does not
> help once someone chooses to enable unicode in Mark's code.
> 
> Maybe '%s' % u'x' should return 'x' not u'x' and u'%s' % 's' return u's'
> 
> Maybe 's' + u'x' should return 'sx' not u'sx'. and u's' + 'x' returns u'sx'
> 
> The above 2 maybes would have hidden the problem in my code, barring exceptions.

When designing the Unicode-string integration we decided to
use the same coercion rules as for numbers: always coerce to the
"bigger" type. Anything else would have caused even more
difficulties.

Again, what needs to be done is to make the tools Unicode aware,
not the magic ;-)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From fredrik at pythonware.com  Tue Sep 19 11:38:01 2000
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Tue, 19 Sep 2000 11:38:01 +0200
Subject: [Python-Dev] Python 1.5.2 modules need porting to 2.0 because of unicode - comments please
References: <200009190813.KAA01033@loewis.home.cs.tu-berlin.de> <39C72BB6.A45A8E77@lemburg.com>
Message-ID: <006601c0221d$4e55b690$0900a8c0@SPIFF>

mal wrote:

> > So in essence, I suggest that the Unicode object does not implement
> > the buffer interface. If that has any undesirable consequences (which
> > ones?), I suggest that 'binary write' operations (sockets, files)
> > explicitly check for Unicode objects, and either reject them, or
> > invoke the system encoding (i.e. ASCII).
> 
> It's too late for any generic changes in the Unicode area.

it's not too late to fix bugs.

> The right thing to do is to make the *tools* Unicode aware, since
> you can't really expect the Unicode-string integration mechanism 
> to fiddle things right in every possible case out there.

no, but people may expect Python to raise an exception instead
of doing something that is not only non-portable, but also clearly
wrong in most real-life cases.

</F>



From mal at lemburg.com  Tue Sep 19 12:34:40 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 19 Sep 2000 12:34:40 +0200
Subject: [Python-Dev] Python 1.5.2 modules need porting to 2.0 because of 
 unicode - comments please
References: <200009190813.KAA01033@loewis.home.cs.tu-berlin.de> <39C72BB6.A45A8E77@lemburg.com> <006601c0221d$4e55b690$0900a8c0@SPIFF>
Message-ID: <39C74140.B4A31C60@lemburg.com>

Fredrik Lundh wrote:
> 
> mal wrote:
> 
> > > So in essence, I suggest that the Unicode object does not implement
> > > the buffer interface. If that has any undesirable consequences (which
> > > ones?), I suggest that 'binary write' operations (sockets, files)
> > > explicitly check for Unicode objects, and either reject them, or
> > > invoke the system encoding (i.e. ASCII).
> >
> > It's too late for any generic changes in the Unicode area.
> 
> it's not too late to fix bugs.

I doubt that we can fix all Unicode related bugs in the 2.0
stdlib before the final release... let's make this a project 
for 2.1.
 
> > The right thing to do is to make the *tools* Unicode aware, since
> > you can't really expect the Unicode-string integration mechanism
> > to fiddle things right in every possible case out there.
> 
> no, but people may expect Python to raise an exception instead
> of doing something that is not only non-portable, but also clearly
> wrong in most real-life cases.

I completely agree that the divergence between "s" and "s#"
is not ideal at all, but that's something the buffer interface
design has to fix (not the Unicode design) since this is a
general problem. AFAIK, no other object distinguishes
between getreadbuf and getcharbuf... this is why the problem
has never shown up before.

Grepping through the stdlib, there are lots of places where
"s#" is expected to work on raw data and others where
conversion to string would be more appropriate, so the one
true solution is not clear at all.

Here are some possible hacks to work-around the Unicode problem:

1. switch off getreadbuf slot

   This would break many IO-calls w/r to Unicode support.

2. make getreadbuf return the same as getcharbuf (i.e. ASCII data)

   This could work, but would break slicing and indexing 
   for e.g. a UTF-8 default encoding.   

3. leave things as they are implemented now and live with the
   consequences (mark the Python stdlib as not Unicode compatible)

   Not ideal, but leaves room for discussion.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From loewis at informatik.hu-berlin.de  Tue Sep 19 14:11:00 2000
From: loewis at informatik.hu-berlin.de (Martin von Loewis)
Date: Tue, 19 Sep 2000 14:11:00 +0200 (MET DST)
Subject: [Python-Dev] sizehint in readlines
Message-ID: <200009191211.OAA06549@pandora.informatik.hu-berlin.de>

I've added support for the sizehint parameter in all places where it
was missing and the documentation referred to the file objects section
(socket, StringIO, cStringIO). The only remaining place with a
readlines function without sizehint is in multifile.py. I'll observe
that the documentation of this module is quite confused: it mentions a
str parameter for readline and readlines.

Should multifile.MultiFile.readlines also support the sizehint? (note
that read() deliberately does not support a size argument).

Regards,
Martin


From loewis at informatik.hu-berlin.de  Tue Sep 19 14:16:29 2000
From: loewis at informatik.hu-berlin.de (Martin von Loewis)
Date: Tue, 19 Sep 2000 14:16:29 +0200 (MET DST)
Subject: [Python-Dev] fileno function in file objects
Message-ID: <200009191216.OAA06594@pandora.informatik.hu-berlin.de>

Section 2.1.7.9 of the library reference explains that file objects
support a fileno method. Is that a mandatory operation on file-like
objects (e.g. StringIO)? If so, how should it be implemented? If not,
shouldn't the documentation declare it optional?

The same question for documented attributes: closed, mode, name,
softspace: need file-like objects to support them?

Regards,
Martin


From mal at lemburg.com  Tue Sep 19 14:42:24 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 19 Sep 2000 14:42:24 +0200
Subject: [Python-Dev] sizehint in readlines
References: <200009191211.OAA06549@pandora.informatik.hu-berlin.de>
Message-ID: <39C75F30.D23CEEF0@lemburg.com>

Martin von Loewis wrote:
> 
> I've added support for the sizehint parameter in all places where it
> was missing and the documentation referred to the file objects section
> (socket, StringIO, cStringIO). The only remaining place with a
> readlines function without sizehint is in multifile.py. I'll observe
> that the documentation of this module is quite confused: it mentions a
> str parameter for readline and readlines.
> 
> Should multifile.MultiFile.readlines also support the sizehint? (note
> that read() deliberately does not support a size argument).

Since it is an optional hint for the implementation, I'd suggest
adding the optional parameter without actually making any use of
it. The interface should be there though.
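[A minimal sketch of that suggestion (the class name is hypothetical, not multifile's real one): the optional sizehint is accepted for interface compatibility but deliberately ignored.]

```python
class MultiFileLike:
    """File-like wrapper over a list of lines."""

    def __init__(self, lines):
        self._lines = list(lines)

    def readline(self):
        return self._lines.pop(0) if self._lines else ""

    def readlines(self, sizehint=None):
        # sizehint is accepted so callers written against the standard
        # file-object interface keep working, but this implementation
        # makes no use of it -- it is only a hint.
        result = []
        while True:
            line = self.readline()
            if not line:
                break
            result.append(line)
        return result
```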

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From mal at lemburg.com  Tue Sep 19 15:01:34 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 19 Sep 2000 15:01:34 +0200
Subject: [Python-Dev] Deja-Search on python.org defunct
Message-ID: <39C763AE.4B126CB1@lemburg.com>

The search button on python.org doesn't search the c.l.p newsgroup
anymore, but instead does a search over all newsgroups.

This link works:

http://www.deja.com/[ST_rn=ps]/qs.xp?ST=PS&svcclass=dnyr&firstsearch=yes&QRY=search_string_goes_here&defaultOp=AND&DBS=1&OP=dnquery.xp&LNG=english&subjects=&groups=comp.lang.python+comp.lang.python.announce&authors=&fromdate=&todate=&showsort=score&maxhits=25

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From guido at beopen.com  Tue Sep 19 16:28:42 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 19 Sep 2000 09:28:42 -0500
Subject: [Python-Dev] sizehint in readlines
In-Reply-To: Your message of "Tue, 19 Sep 2000 14:11:00 +0200."
             <200009191211.OAA06549@pandora.informatik.hu-berlin.de> 
References: <200009191211.OAA06549@pandora.informatik.hu-berlin.de> 
Message-ID: <200009191428.JAA02596@cj20424-a.reston1.va.home.com>

> I've added support for the sizehint parameter in all places where it
> was missing and the documentation referred to the file objects section
> (socket, StringIO, cStringIO). The only remaining place with a
> readlines function without sizehint is in multifile.py. I'll observe
> that the documentation of this module is quite confused: it mentions a
> str parameter for readline and readlines.

That's one for Fred...

> Should multifile.MultiFile.readlines also support the sizehint? (note
> that read() deliberately does not support a size argument).

I don't care about it here -- that API is clearly substandard.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From guido at beopen.com  Tue Sep 19 16:33:02 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 19 Sep 2000 09:33:02 -0500
Subject: [Python-Dev] fileno function in file objects
In-Reply-To: Your message of "Tue, 19 Sep 2000 14:16:29 +0200."
             <200009191216.OAA06594@pandora.informatik.hu-berlin.de> 
References: <200009191216.OAA06594@pandora.informatik.hu-berlin.de> 
Message-ID: <200009191433.JAA02626@cj20424-a.reston1.va.home.com>

> Section 2.1.7.9 of the library reference explains that file objects
> support a fileno method. Is that a mandatory operation on file-like
> objects (e.g. StringIO)? If so, how should it be implemented? If not,
> shouldn't the documentation declare it optional?
> 
> The same question for documented attributes: closed, mode, name,
> softspace: need file-like objects to support them?

fileno() (and isatty()) is OS specific and only works if there *is* an
underlying file number.  It should not be implemented (not even as
raising an exception) if it isn't there.

Support for softspace is needed when you expect to be printing to a
file.

The others are implementation details of the built-in file object, but
would be nice to have if they can be implemented; code that requires
them is not fully supportive of file-like objects.
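[In modern terms, code that wants to be fully supportive of file-like objects probes for fileno() instead of assuming it; a sketch (note that today's io.StringIO defines fileno() but raises, so the probe catches that too):]

```python
import io
import os

def describe(f):
    # Probe for an OS-level descriptor rather than assuming one exists;
    # file-like objects may omit fileno() entirely, or define it but raise.
    if hasattr(f, "fileno"):
        try:
            return f"fd {f.fileno()}"
        except OSError:
            pass
    return "no underlying file descriptor"

with open(os.devnull) as real_file:
    real = describe(real_file)      # a real file has a descriptor
mem = describe(io.StringIO("hi"))   # an in-memory file does not
```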

Note that this (and other, similar issues) is all because Python
doesn't have a standard class hierarchy.  I expect that we'll fix all
this in Python 3000.  Until then, I guess we have to muddle forth...

BTW, did you check in test cases for all the methods you fixed?

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From bwarsaw at beopen.com  Tue Sep 19 17:43:15 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Tue, 19 Sep 2000 11:43:15 -0400 (EDT)
Subject: [Python-Dev] fileno function in file objects
References: <200009191216.OAA06594@pandora.informatik.hu-berlin.de>
	<200009191433.JAA02626@cj20424-a.reston1.va.home.com>
Message-ID: <14791.35219.817065.241735@anthem.concentric.net>

>>>>> "GvR" == Guido van Rossum <guido at beopen.com> writes:

    GvR> Note that this (and other, similar issues) is all because
    GvR> Python doesn't have a standard class hierarchy.

Or a formal interface mechanism.

-Barry


From bwarsaw at beopen.com  Tue Sep 19 17:43:50 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Tue, 19 Sep 2000 11:43:50 -0400 (EDT)
Subject: [Python-Dev] sizehint in readlines
References: <200009191211.OAA06549@pandora.informatik.hu-berlin.de>
	<200009191428.JAA02596@cj20424-a.reston1.va.home.com>
Message-ID: <14791.35254.565129.298375@anthem.concentric.net>

>>>>> "GvR" == Guido van Rossum <guido at beopen.com> writes:

    >> Should multifile.MultiFile.readlines also support the sizehint?
    >> (note that read() deliberately does not support a size
    >> argument).

    GvR> I don't care about it here -- that API is clearly
    GvR> substandard.

Indeed!
-Barry


From klm at digicool.com  Tue Sep 19 20:25:04 2000
From: klm at digicool.com (Ken Manheimer)
Date: Tue, 19 Sep 2000 14:25:04 -0400 (EDT)
Subject: [Python-Dev] fileno function in file objects - Interfaces
 Scarecrow
In-Reply-To: <14791.35219.817065.241735@anthem.concentric.net>
Message-ID: <Pine.LNX.4.21.0009191357370.24497-200000@korak.digicool.com>

Incidentally...

On Tue, 19 Sep 2000, Barry A. Warsaw wrote:

> >>>>> "GvR" == Guido van Rossum <guido at beopen.com> writes:
> 
>     GvR> Note that this (and other, similar issues) is all because
>     GvR> Python doesn't have a standard class hierarchy.
> 
> Or a formal interface mechanism.

Incidentally, jim/Zope is going forward with something like the interfaces
strawman - the "scarecrow" - that jim proposed at IPC?7?.  I don't know if
a PEP would have made any sense for 2.x, so maybe it's just as well we
haven't had time.  In the meanwhile, DC will get a chance to get
experience with and refine it... 

Anyway, for anyone that might be interested, i'm attaching a copy of
python/lib/Interfaces/README.txt from a recent Zope2 checkout.  I was
pretty enthusiastic about it when jim originally presented the scarecrow,
and on skimming it now it looks very cool.  (I'm not getting it all on my
quick peruse, and i suspect there's some contortions that wouldn't be
necessary if it were happening more closely coupled with python
development - but what jim sketches out is surprising sleek,
regardless...)

ken
klm at digicool.com
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: README.txt
URL: <http://mail.python.org/pipermail/python-dev/attachments/20000919/d42b6c84/attachment.txt>

From martin at loewis.home.cs.tu-berlin.de  Tue Sep 19 22:48:53 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Tue, 19 Sep 2000 22:48:53 +0200
Subject: [Python-Dev] Python 1.5.2 modules need porting to 2.0 because of unicode - comments please
Message-ID: <200009192048.WAA01414@loewis.home.cs.tu-berlin.de>

> I doubt that we can fix all Unicode related bugs in the 2.0
> stdlib before the final release... let's make this a project 
> for 2.1.

Exactly my feelings. Since we cannot possibly fix all problems, we may
need to change the behaviour later.

If we now silently do the wrong thing, silently changing it to the
then-right thing in 2.1 may break people's code. So I'm asking that
cases where it does not clearly do the right thing produce an
exception now; we can later fix it to accept more cases, should the
need arise.

In the specific case, dropping support for Unicode output in binary
files is the right thing. We don't know what the user expects, so it
is better to produce an exception than to silently put incorrect bytes
into the stream - that is a bug that we still can fix.

The easiest way with the clearest impact is to drop the buffer
interface in unicode objects. Alternatively, not supporting them
for s# also appears reasonable. Users experiencing the problem in
testing will then need to make an explicit decision how they want to
encode the Unicode objects.

If any expediting of the issue is necessary, I can submit a bug report,
and propose a patch.

Regards,
Martin


From guido at beopen.com  Wed Sep 20 00:00:34 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 19 Sep 2000 17:00:34 -0500
Subject: [Python-Dev] Python 1.5.2 modules need porting to 2.0 because of unicode - comments please
In-Reply-To: Your message of "Tue, 19 Sep 2000 22:48:53 +0200."
             <200009192048.WAA01414@loewis.home.cs.tu-berlin.de> 
References: <200009192048.WAA01414@loewis.home.cs.tu-berlin.de> 
Message-ID: <200009192200.RAA01853@cj20424-a.reston1.va.home.com>

> > I doubt that we can fix all Unicode related bugs in the 2.0
> > stdlib before the final release... let's make this a project 
> > for 2.1.
> 
> Exactly my feelings. Since we cannot possibly fix all problems, we may
> need to change the behaviour later.
> 
> If we now silently do the wrong thing, silently changing it to the
> then-right thing in 2.1 may break people's code. So I'm asking that
> cases where it does not clearly do the right thing produce an
> exception now; we can later fix it to accept more cases, should the
> need arise.
> 
> In the specific case, dropping support for Unicode output in binary
> files is the right thing. We don't know what the user expects, so it
> is better to produce an exception than to silently put incorrect bytes
> into the stream - that is a bug that we still can fix.
> 
> The easiest way with the clearest impact is to drop the buffer
> interface in unicode objects. Alternatively, not supporting them
> for s# also appears reasonable. Users experiencing the problem in
> testing will then need to make an explicit decision how they want to
> encode the Unicode objects.
> 
> If any expediting of the issue is necessary, I can submit a bug report,
> and propose a patch.

Sounds reasonable to me (but I haven't thought of all the issues).

For writing binary Unicode strings, one can use

  f.write(u.encode("utf-16"))		# Adds byte order mark
  f.write(u.encode("utf-16-be"))	# Big-endian
  f.write(u.encode("utf-16-le"))	# Little-endian
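[For reference, those three encodings behave as follows in a runnable sketch: "utf-16" prepends a byte order mark in native order, while the -be/-le variants fix the byte order and emit no BOM.]

```python
import codecs

u = "abc"
with_bom = u.encode("utf-16")     # native order, BOM prepended
big = u.encode("utf-16-be")       # explicit big-endian, no BOM
little = u.encode("utf-16-le")    # explicit little-endian, no BOM
```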

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From mal at lemburg.com  Tue Sep 19 23:29:06 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 19 Sep 2000 23:29:06 +0200
Subject: [Python-Dev] Python 1.5.2 modules need porting to 2.0 because of 
 unicode - comments please
References: <200009192048.WAA01414@loewis.home.cs.tu-berlin.de> <200009192200.RAA01853@cj20424-a.reston1.va.home.com>
Message-ID: <39C7DAA2.A04E5008@lemburg.com>

Guido van Rossum wrote:
> 
> > > I doubt that we can fix all Unicode related bugs in the 2.0
> > > stdlib before the final release... let's make this a project
> > > for 2.1.
> >
> > Exactly my feelings. Since we cannot possibly fix all problems, we may
> > need to change the behaviour later.
> >
> > If we now silently do the wrong thing, silently changing it to the
> > then-right thing in 2.1 may break people's code. So I'm asking that
> > cases where it does not clearly do the right thing produce an
> > exception now; we can later fix it to accept more cases, should the
> > need arise.
> >
> > In the specific case, dropping support for Unicode output in binary
> > files is the right thing. We don't know what the user expects, so it
> > is better to produce an exception than to silently put incorrect bytes
> > into the stream - that is a bug that we still can fix.
> >
> > The easiest way with the clearest impact is to drop the buffer
> > interface in unicode objects. Alternatively, not supporting them
> > for s# also appears reasonable. Users experiencing the problem in
> > testing will then need to make an explicit decision how they want to
> > encode the Unicode objects.
> >
> > If any expediting of the issue is necessary, I can submit a bug report,
> > and propose a patch.
> 
> Sounds reasonable to me (but I haven't thought of all the issues).
> 
> For writing binary Unicode strings, one can use
> 
>   f.write(u.encode("utf-16"))           # Adds byte order mark
>   f.write(u.encode("utf-16-be"))        # Big-endian
>   f.write(u.encode("utf-16-le"))        # Little-endian

Right.

Possible ways to fix this:

1. disable Unicode's getreadbuf slot

   This would effectively make Unicode objects unusable for
   all APIs which use "s#"... and probably give people a lot
   of headaches. OTOH, it would probably motivate lots of
   users to submit patches for the stdlib which makes it
   Unicode aware (hopefully ;-)

2. same as 1., but also make "s#" fall back to getcharbuf
   in case getreadbuf is not defined

   This would make Unicode objects compatible with "s#", but
   still prevent writing of binary data: getcharbuf returns
   the Unicode object encoded using the default encoding, which
   is ASCII by default.

3. special case "s#" in some way to handle Unicode or to
   raise an exception pointing explicitly to the problem
   and its (possible) solution

I'm not sure which of these paths to take. Perhaps solution
2 is the most feasible compromise between "exceptions everywhere"
and "encoding confusion".

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From guido at beopen.com  Wed Sep 20 00:47:11 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 19 Sep 2000 17:47:11 -0500
Subject: [Python-Dev] Missing API in re module
Message-ID: <200009192247.RAA02122@cj20424-a.reston1.va.home.com>

When investigating and fixing Tim's report that the Replace dialog in
IDLE was broken, I realized that there's an API missing from the re
module.

For search-and-replace, IDLE uses a regular expression to find the
next match, and then needs to do whatever sub() does to that match.
But there's no API to spell "whatever sub() does"!  It's not safe to
call sub() on just the matching substring -- the match might depend on
context.

It seems that a new API is needed.  I propose to add the following
method of match objects:

  match.expand(repl)

    Return the string obtained by doing backslash substitution as for
    the sub() method in the replacement string: expansion of \n ->
    linefeed etc., and expansion of numeric backreferences (\1, \2,
    ...) and named backreferences (\g<1>, \g<name>, etc.);
    backreferences refer to groups in the match object.
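[For the record, match objects in today's re module do provide exactly this method; a sketch of the proposed behaviour:]

```python
import re

m = re.search(r"(?P<first>\w+) (?P<second>\w+)", "hello world")

swapped = m.expand(r"\2 \1")                # numeric backreferences
named = m.expand(r"\g<second>-\g<first>")   # named backreferences
escaped = m.expand(r"\1\n\2")               # \n expands to a linefeed
```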

Or am I missing something and is there already a way to do this?

(Side note: the SRE code does some kind of compilation on the
replacement template; I'd like to see this cached, as otherwise IDLE's
replace-all button will take forever...)

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From thomas at xs4all.net  Wed Sep 20 15:23:10 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 20 Sep 2000 15:23:10 +0200
Subject: [Python-Dev] problems importing _tkinter on Linux build
In-Reply-To: <20000917144614.A25718@ActiveState.com>; from trentm@ActiveState.com on Sun, Sep 17, 2000 at 02:46:14PM -0700
References: <20000917142718.A25180@ActiveState.com> <20000917144614.A25718@ActiveState.com>
Message-ID: <20000920152309.A6675@xs4all.nl>

On Sun, Sep 17, 2000 at 02:46:14PM -0700, Trent Mick wrote:
> On Sun, Sep 17, 2000 at 02:27:18PM -0700, Trent Mick wrote:
> > 
> > I get the following error trying to import _tkinter in a Python 2.0 build:
> > 
> > > ./python
> > ./python: error in loading shared libraries: libtk8.3.so: cannot open shared object file: No such file or directory
> > 

> Duh, learning about LD_LIBRARY_PATH (set LD_LIBRARY_PATH to /usr/local/lib)
> and everything is hunky dory. I presumed that /usr/local/lib would be
> on the default search path for shared libraries. Bad assumption I guess.

On *some* ELF systems (at least Linux and BSDI) you can add /usr/local/lib
to /etc/ld.so.conf and rerun 'ldconfig' (which rebuilds the cache file
/etc/ld.so.cache that is used as the search path.) I personally find this
a much better approach than LD_LIBRARY_PATH or -R/-rpath, especially for
'system-wide' shared libraries. (You can use one of the other approaches
if you want to tie a specific binary to a specific shared library in a
specific directory, or have a binary use a different shared library from
a different directory in some cases -- though you can use LD_PRELOAD and
such for that as well.)

If you tie your binary to a specific directory, you might lose portability,
necessitating ugly script-hacks that find & set a proper LD_LIBRARY_PATH or
LD_PRELOAD and such before calling the real program. I'm not sure if recent
SunOS's support something like ld.so.conf, but old ones didn't, and I sure
wish they did ;)

Back-from-vacation-and-trying-to-catch-up-on-2000+-mails-ly y'rs,
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From mal at lemburg.com  Wed Sep 20 16:22:44 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Wed, 20 Sep 2000 16:22:44 +0200
Subject: [Python-Dev] Python syntax checker ?
Message-ID: <39C8C834.5E3B90E7@lemburg.com>

Would it be possible to write a Python syntax checker that doesn't
stop processing at the first error it finds but instead tries
to continue as far as possible (much like make -k) ?

If yes, could the existing Python parser/compiler be reused for
such a tool ?

I was asked to write a tool which checks Python code and returns
a list of found errors (syntax error and possibly even some
lint warnings) instead of stopping at the first error it finds.

Thanks for any tips,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From loewis at informatik.hu-berlin.de  Wed Sep 20 19:07:06 2000
From: loewis at informatik.hu-berlin.de (Martin von Loewis)
Date: Wed, 20 Sep 2000 19:07:06 +0200 (MET DST)
Subject: [Python-Dev] Python syntax checker ?
Message-ID: <200009201707.TAA07172@pandora.informatik.hu-berlin.de>

> Would it be possible to write a Python syntax checker that doesn't
> stop processing at the first error it finds but instead tries to
> continue as far as possible (much like make -k) ?

In "Compilerbau" (compiler construction), this is referred to as
"Fehlerstabilisierung" (error stabilization). I suggest having a look at
the dragon book (Aho, Sethi, Ullman).

The common approach is to insert or remove tokens, using some
heuristics. In YACC, it is possible to add error productions to the
grammar. Whenever an error occurs, the parser assigns all tokens to
the "error" non-terminal until it concludes that it can perform a
reduce action.

A similar approach might work for the Python Grammar. For each
production, you'd define a set of stabilization tokens. If these are
encountered, then the rule would be considered complete. Everything is
consumed until a stabilization token is found.

For example, all expressions could be stabilized with a
keyword. I.e. if you encounter a syntax error inside an expression,
you ignore all tokens until you see 'print', 'def', 'while', etc.
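That skip-until-keyword idea can be sketched in a few lines of Python; the flat token list and the SYNC set below are illustrative stand-ins, not Python's real token stream or grammar:

```python
# Panic-mode recovery sketch: on a syntax error, skip tokens until a
# "stabilization" keyword is seen, then resume parsing there.
# SYNC is an illustrative subset of statement-starting keywords.
SYNC = {'print', 'def', 'while', 'for', 'if', 'return'}

def recover(tokens, pos):
    """Return the index of the next stabilization token at or after pos."""
    while pos < len(tokens) and tokens[pos] not in SYNC:
        pos += 1
    return pos

tokens = ['x', '=', '1', '+', '+', ')', 'print', 'x']
# a parser hitting an error at index 4 would resume at 'print' (index 6)
```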

In some cases, it may be better to add input rather than removing
it. For example, if you get an "inconsistent dedent" error, you could
assume that this really was a consistent dedent, or you could assume
it was not meant as a dedent at all. Likewise, if you get a
single-quote start-of-string, with no single-quote until end-of-line,
you just should assume there was one.

Adding error productions to ignore input until stabilization may be
feasible on top of the existing parser. Adding tokens in the right
place is probably harder - I'd personally go for a pure Python
solution, that operates on Grammar/Grammar.

Regards,
Martin



From tismer at appliedbiometrics.com  Wed Sep 20 18:35:50 2000
From: tismer at appliedbiometrics.com (Christian Tismer)
Date: Wed, 20 Sep 2000 19:35:50 +0300
Subject: [Python-Dev] 2.0 Optimization & speed
References: <200009082048.WAA14671@python.inrialpes.fr> <39B951CC.3C0AE801@lemburg.com>
Message-ID: <39C8E766.18D9BDD8@appliedbiometrics.com>


"M.-A. Lemburg" wrote:
> 
> Vladimir Marangozov wrote:
> >
> > M.-A. Lemburg wrote:
> > >
> > > Fredrik Lundh wrote:
> > > >
> > > > mal wrote:

...

> > Hey Marc-Andre, don't try to reduce /F's crunching efforts to dust.
> 
> Oh, I didn't try to reduce Fredrik's efforts at all. To the
> contrary: I'm still looking forward to his melted down version
> of the database and the ctype tables.

Howdy. It may be that not you but I will melt /F's efforts
to dust, since I might have one or two days of time
to finish my long-promised code generator :-)
Well, probably just merging our dust :-)

> > Every bit costs money, and that's why
> > Van Jacobson packet-header compression has been invented and is
> > massively used. Whole armies of researchers are currently trying to
> > compensate the irresponsible bloatware that people of the higher
> > layers are imposing on them <wink>. Careful!
> 
> True, but why the hurry ?

I have no reason to complain since I didn't do my homework.
Anyway, a partially bloated distribution might be harmful
to Python's reputation. When looking through the whole
source set, there is no bloat anywhere. Everything is
well thought out, and fairly optimized between space and speed.
Well, there is this one module which cries for being replaced,
and which still prevents *me* from moving to Python 1.6 :-)

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer at appliedbiometrics.com>
Applied Biometrics GmbH      :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com


From martin at loewis.home.cs.tu-berlin.de  Wed Sep 20 21:22:24 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Wed, 20 Sep 2000 21:22:24 +0200
Subject: [Python-Dev] Disabling Unicode readbuffer interface
Message-ID: <200009201922.VAA01669@loewis.home.cs.tu-berlin.de>

I just tried to disable the getreadbufferproc on Unicode objects. Most
of the test suite continues to work. 

test_unicode fails, which is caused by "s#" not working anymore
in readbuffer_encode when testing the unicode_internal encoding. That
could be fixed (*).

More concerning, sre fails when matching a unicode string. sre uses
the getreadbufferproc to get to the internal representation. If it has
sizeof(Py_UNICODE) times as many bytes as it is long, we got a unicode
buffer (?!?).

I'm not sure what the right solution would be in this case: I *think*
sre should have more specific knowledge of Unicode objects, so it
should support objects with a buffer interface representing a 1-byte
character string, or Unicode objects. Actually, is there anything
wrong with sre operating on string and unicode objects only? It
requires that the buffer has a single segment, anyway...

Regards,
Martin

(*) The 'internal encoding' function should directly get to the
representation of the unicode object, and readbuffer_encode could
become Python:

def readbuffer_encode(o,errors="strict"):
  b = buffer(o)
  return str(b),len(b)

or be removed altogether, as it would (rightfully) stop working on
unicode objects.


From effbot at telia.com  Wed Sep 20 21:57:16 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Wed, 20 Sep 2000 21:57:16 +0200
Subject: [Python-Dev] Disabling Unicode readbuffer interface
References: <200009201922.VAA01669@loewis.home.cs.tu-berlin.de>
Message-ID: <021801c0233c$fec04fc0$766940d5@hagrid>

martin wrote:
> More concerning, sre fails when matching a unicode string. sre uses
> the getreadbufferproc to get to the internal representation. If it has
> sizeof(Py_UNICODE) times as many bytes as it is long, we got a unicode
> buffer (?!?).

...or an integer buffer.

(who says you can only use regular expressions on character
strings? ;-)

> I'm not sure what the right solution would be in this case: I *think*
> sre should have more specific knowledge of Unicode objects, so it
> should support objects with a buffer interface representing a 1-byte
> character string, or Unicode objects. Actually, is there anything
> wrong with sre operating on string and unicode objects only?

let's add a special case for unicode strings.  I'm actually using
the integer buffer support (don't ask), so I'd prefer to leave it
in there.

no time tonight, but I can check in a fix tomorrow.

</F>



From thomas at xs4all.net  Wed Sep 20 22:02:48 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 20 Sep 2000 22:02:48 +0200
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: <200009201707.TAA07172@pandora.informatik.hu-berlin.de>; from loewis@informatik.hu-berlin.de on Wed, Sep 20, 2000 at 07:07:06PM +0200
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de>
Message-ID: <20000920220248.E6675@xs4all.nl>

On Wed, Sep 20, 2000 at 07:07:06PM +0200, Martin von Loewis wrote:
> Adding error productions to ignore input until stabilization may be
> feasible on top of the existing parser. Adding tokens in the right
> place is probably harder - I'd personally go for a pure Python
> solution, that operates on Grammar/Grammar.

Don't forget that there are two kinds of SyntaxErrors in Python: those that
are generated by the tokenizer/parser, and those that are actually generated
by the (bytecode-)compiler. (inconsistent indent/dedent errors, incorrect
uses of (augmented) assignment, incorrect placing of particular keywords,
etc, are all generated while actually compiling the code.) Also, in order to
be really useful, the error-indicator would have to be pretty intelligent.
Imagine something like this:

if 1:

     doodle()

    forever()
    and_ever()
    <tons more code using 4-space indent>

With the current interpreter, that would generate a single warning, on the
line below the one that is the actual problem. If you continue searching for
errors, you'll get tons and tons of errors, all because the first line was
indented too far.

An easy way to work around it is probably to consider all tokenizer-errors
and some of the compiler-generated errors (like indent/dedent ones) as
really-fatal errors, and only handle the errors that are likely to be
manageable, skipping over the affected lines or treating them as no-ops.
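That work-around can be sketched as a small make -k style loop: record each SyntaxError, blank the offending line, and recompile (shown here with present-day compile()/except syntax; the 20-error cap is an arbitrary choice):

```python
# Naive "keep going" syntax checker: blank each offending line and
# recompile, collecting errors until the source compiles or we give up.
def check_syntax(source, max_errors=20):
    lines = source.splitlines()
    errors = []
    while len(errors) < max_errors:
        try:
            compile('\n'.join(lines) + '\n', '<input>', 'exec')
            break
        except SyntaxError as e:
            if e.lineno is None or e.lineno > len(lines):
                break  # no line to skip; treat as really-fatal
            errors.append((e.lineno, e.msg))
            lines[e.lineno - 1] = ''  # skip the line, make -k style
    return errors
```

For the indentation example above this still reports a pile of follow-on errors, which is exactly the cascading problem being described.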

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From martin at loewis.home.cs.tu-berlin.de  Wed Sep 20 22:50:30 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Wed, 20 Sep 2000 22:50:30 +0200
Subject: [Python-Dev] [ Bug #110676 ] fd.readlines() hangs (via popen3) (PR#385)
Message-ID: <200009202050.WAA02298@loewis.home.cs.tu-berlin.de>

I've closed your report at

http://sourceforge.net/bugs/?func=detailbug&bug_id=110676&group_id=5470

That is a bug in the application code. The slave tries to write 6000
bytes to stderr, and blocks after writing 4096 (number measured on
Linux; more generally, after _PC_PIPE_BUF bytes).  The server starts
reading on stdin, and blocks also, so you get a deadlock.  The proper
solution is to use 

import popen2

r,w,e = popen2.popen3 ( 'python slave.py' ) 
e.readlines() 
r.readlines() 
r.close() 
e.close() 
w.close() 

as the master, and 

import sys,posix 

e = sys.stderr.write 
w = sys.stdout.write 

e(400*'this is a test\n') 
posix.close(2) 
w(400*'this is another test\n') 

as the slave. Notice that stderr must be closed after writing all
data, or readlines won't return. Also notice that posix.close must be
used, as sys.stderr.close() won't close stderr (apparently due to
concerns that assigning to sys.stderr would silently close it, so no
further errors can be printed).

In general, it would be better to use select(2) on the files returned
from popen3, or spread the reading of the individual files onto
several threads.
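The thread-based variant mentioned above can be sketched with the present-day subprocess module (which has since replaced popen2): a reader thread drains stderr while the main thread drains stdout, so neither pipe can fill up and deadlock the child.

```python
import subprocess
import sys
import threading

def drain(pipe, sink):
    """Read a pipe to EOF; run one of these per pipe."""
    sink.append(pipe.read())

def run_capture(cmd):
    # Drain stdout and stderr concurrently so the child can never block
    # on a full pipe buffer (the deadlock described above).
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE)
    out, err = [], []
    t = threading.Thread(target=drain, args=(p.stderr, err))
    t.start()
    drain(p.stdout, out)
    t.join()
    p.wait()
    return out[0], err[0]

# e.g. run_capture([sys.executable, 'slave.py']) -- no careful close
# ordering is needed in the slave, unlike the popen2 version above.
```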

Regards,
Martin


From MarkH at ActiveState.com  Thu Sep 21 01:37:31 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Thu, 21 Sep 2000 10:37:31 +1100
Subject: [Python-Dev] FW: [humorix] Unobfuscated Perl Code Contest
Message-ID: <ECEPKNMJLHAPFFJHDOJBGEFJDKAA.MarkH@ActiveState.com>

And now for something completely different ;-)
--
Unobfuscated Perl Code Contest
September 16, 19100

The Perl Gazette has announced the winners in the First
Annual _Un_obfuscated Perl Code Contest.  First place went
to Edwin Fuller, who submitted this unobfuscated program:

#!/usr/bin/perl
print "Hello world!\n";

"This was definitely a challenging contest," said an
ecstatic Edwin Fuller. "I've never written a Perl program
before that didn't have hundreds of qw( $ @ % & * | ? / \ !
# ~ ) symbols.  I really had to summon all of my
programming skills to produce an unobfuscated program."

The judges in the contest learned that many programmers
don't understand the meaning of 'unobfuscated perl'.  For
instance, one participant sent in this 'Hello world!'
program:

#!/usr/bin/perl
$x='unob';
open OUT, ">$x.c";
print OUT <<HERE_DOC;
#include <stdio.h>
int main(void) { 
 FILE *f=fopen("$x.sh", "w");
 fprintf(f,"echo Hello world!\\n");
 fclose(f);
 system("chmod +x $x.sh");
 system("./$x.sh"); return 0; 
}
HERE_DOC
close OUT;
system("gcc $x.c -o $x && ./$x");

"As an experienced Perl monger," said one of the judges, "I
can instantly tell that this program spits out C source
code that spits out a shell script to print 'Hello
world!'.  But this code certainly does not qualify as
unobfuscated Perl -- I mean, most of it isn't even written
in Perl!"

He added, "Out of all of the entries, only two were
actually unobfuscated perl.  Everything else looked like
line noise -- or worse."

The second place winner, Mrs. Sea Pearl, submitted the
following code:

#!/usr/bin/perl
use strict;
# Do nothing, successfully
exit(0);

"I think everybody missed the entire point of this
contest," ranted one judge.  "Participants were supposed to
produce code that could actually be understood by somebody
other than a ten-year Perl veteran.  Instead, we get an
implementation of a Java Virtual Machine.  And a version of
the Linux kernel ported to Win32 Perl.  Sheesh!"

In response to the news, a rogue group of Perl hackers have
presented a plan to add a "use really_goddamn_strict"
pragma to the language that would enforce readability and
unobfuscation.  With this pragma in force, the Perl
compiler might say:

 Warning: Program contains zero comments.  You've probably
 never seen or used one before; they begin with a #
 symbol.  Please start using them or else a representative
 from the nearest Perl Mongers group will come to your
 house and beat you over the head with a cluestick.

 Warning: Program uses a cute trick at line 125 that might
 make sense in C.  But this isn't C!

 Warning: Code at line 412 indicates that programmer is an
 idiot. Please correct error between chair and monitor.

 Warning: While There's More Than One Way To Do It, your
 method at line 523 is particularly stupid.  Please try
 again.

 Warning: Write-only code detected between lines 612 and
 734. While this code is perfectly legal, you won't have
 any clue what it does in two weeks.  I recommend you start
 over.

 Warning: Code at line 1,024 is indistinguishable from line
 noise or the output of /dev/random

 Warning: Have you ever properly indented a piece of code
 in your entire life?  Evidently not.

 Warning: I think you can come up with a more descriptive
 variable name than "foo" at line 1,523.

 Warning: Programmer attempting to re-invent the wheel at
 line 2,231. There's a function that does the exact same
 thing on CPAN -- and it actually works.

 Warning: Perl tries to make the easy jobs easy without
 making the hard jobs impossible -- but your code at line
 5,123 is trying to make an easy job impossible.  

 Error: Programmer failed to include required string "All
 hail Larry Wall" within program.  Execution aborted due to
 compilation errors.

Of course, convincing programmers to actually use that
pragma is another matter.  "If somebody actually wanted to
write readable code, why would they use Perl?  Let 'em use
Python!" exclaimed one Usenet regular.  "So this pragma is
a waste of electrons, just like use strict and the -w
command line parameter."

-
Humorix:      Linux and Open Source(nontm) on a lighter note
Archive:      http://humbolt.nl.linux.org/lists/
Web site:     http://www.i-want-a-website.com/about-linux/

----- End forwarded message -----



From bwarsaw at beopen.com  Thu Sep 21 02:02:22 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Wed, 20 Sep 2000 20:02:22 -0400 (EDT)
Subject: [Python-Dev] forwarded message from noreply@sourceforge.net
Message-ID: <14793.20494.375237.320590@anthem.concentric.net>


For those of you who may not have received this message, please be
aware that SourceForge will have scheduled downtime this Friday night
until Saturday morning.

-Barry

-------------- next part --------------
An embedded message was scrubbed...
From: noreply at sourceforge.net
Subject: SourceForge:  Important Site News
Date: Tue, 12 Sep 2000 19:58:47 -0700
Size: 2802
URL: <http://mail.python.org/pipermail/python-dev/attachments/20000920/fe211637/attachment.eml>

From tim_one at email.msn.com  Thu Sep 21 02:19:41 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 20 Sep 2000 20:19:41 -0400
Subject: [Python-Dev] forwarded message from noreply@sourceforge.net
In-Reply-To: <14793.20494.375237.320590@anthem.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCMECNHHAA.tim_one@email.msn.com>

[Barry A. Warsaw]
> For those of you who may not have received this message, please be
> aware that SourceForge will have scheduled downtime this Friday night
> until Saturday morning.

... This move will take place on Friday night( Sept 22nd) at 10pm and
    continue to 8am Saturday morning (Pacific Standard Time).  During
    this time the site will be off-line as we make the physical change.

Looks to me like they started 30 hours early!  SF has been down more than up
all day, by my count.

So, for recreation in our idly desperate moments, let me recommend a quick
read, and especially to our friends at BeOpen, ActiveState and Secret Labs:

    http://linuxtoday.com/news_story.php3?ltsn=2000-09-20-006-21-OP-BZ-LF
    "Savor the Unmarketed Moment"
    "Marketers are drawn to money as surely as maggots were drawn
    to aforementioned raccoon ...
    The Bazaar is about to be blanketed with smog emitted by the
    Cathedral's smokestacks.  Nobody will be prevented from doing
    whatever he or she was doing before, but the oxygen level will
    be dropping and visibility will be impaired."

gasping-a-bit-from-the-branding-haze-himself<0.5-wink>-ly y'rs  - tim




From guido at beopen.com  Thu Sep 21 03:57:39 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 20 Sep 2000 20:57:39 -0500
Subject: [Python-Dev] SourceForge downtime postponed
In-Reply-To: Your message of "Wed, 20 Sep 2000 20:19:41 -0400."
             <LNBBLJKPBEHFEDALKOLCMECNHHAA.tim_one@email.msn.com> 
References: <LNBBLJKPBEHFEDALKOLCMECNHHAA.tim_one@email.msn.com> 
Message-ID: <200009210157.UAA05881@cj20424-a.reston1.va.home.com>

> Looks to me like they started 30 hours early!  SF has been down more than up
> all day, by my account.

Actually, they're back in business, and they improved the Bugs manager!
(E.g. there are now group management facilities on the front page.)

They also mailed around today that the move won't be until mid
October.  That's good, insofar that it doesn't take SF away from us
while we're in the heat of the 2nd beta release!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From guido at beopen.com  Thu Sep 21 04:17:20 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 20 Sep 2000 21:17:20 -0500
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: Your message of "Wed, 20 Sep 2000 16:22:44 +0200."
             <39C8C834.5E3B90E7@lemburg.com> 
References: <39C8C834.5E3B90E7@lemburg.com> 
Message-ID: <200009210217.VAA06180@cj20424-a.reston1.va.home.com>

> Would it be possible to write a Python syntax checker that doesn't
> stop processing at the first error it finds but instead tries
> to continue as far as possible (much like make -k) ?
> 
> If yes, could the existing Python parser/compiler be reused for
> such a tool ?
> 
> I was asked to write a tool which checks Python code and returns
> a list of found errors (syntax error and possibly even some
> lint warnings) instead of stopping at the first error it finds.

I had some ideas for this in the context of CP4E, and I even tried to
implement some, but didn['t get far enough to check it in anywhere.
Then I lost track of the code in the BeOpen move.  (It wasn't very
much.)

I used a completely different approach to parsing: look at the code
from the outside in, e.g. when you see

  def foo(a,b,c):
      print a
      for i in range(b):
          while x:
              print v
      else:
          bah()

you first notice that there's a line starting with a 'def' keyword
followed by some indented stuff; then you notice that the indented
stuff is a line starting with 'print', a line starting with 'for'
followed by more indented stuff, and a line starting with 'else' and
more indented stuff; etc.

This requires tokenization to succeed -- you need to know which lines are
continuation lines, and what is a string or comment, before you can
parse the rest; but I believe it can be made to work even in the face
of quite severe problems.
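The first, indentation-only pass of that outside-in scan is easy to sketch; continuation lines, strings and comments are ignored here, which is precisely the hard part noted above:

```python
# Outside-in sketch: recover the nesting structure from indentation
# alone, before looking at what each statement means.
def outline(lines):
    """Return a tree of (stripped_line, children) pairs."""
    result = []
    stack = [(-1, result)]  # (indent, children-list) pairs
    for line in lines:
        if not line.strip():
            continue  # blank lines carry no structure
        indent = len(line) - len(line.lstrip())
        while indent <= stack[-1][0]:
            stack.pop()  # this line closes deeper blocks
        node = (line.strip(), [])
        stack[-1][1].append(node)
        stack.append((indent, node[1]))
    return result
```

Applied to the example above, this yields a 'def' node whose children are the 'print', 'for' and 'else' lines, with the 'while' nested under 'for', and so on.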

(No time to elaborate. :-( )

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From mal at lemburg.com  Thu Sep 21 12:32:23 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 21 Sep 2000 12:32:23 +0200
Subject: [Python-Dev] Python syntax checker ?
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de>
Message-ID: <39C9E3B7.5F9BFC01@lemburg.com>

Martin von Loewis wrote:
> 
> > Would it be possible to write a Python syntax checker that doesn't
> > stop processing at the first error it finds but instead tries to
> > continue as far as possible (much like make -k) ?
> 
> In "Compilerbau" (compiler construction), this is referred to as
> "Fehlerstabilisierung" (error stabilization). I suggest having a look at
> the dragon book (Aho, Sethi, Ullman).
> 
> The common approach is to insert or remove tokens, using some
> heuristics. In YACC, it is possible to add error productions to the
> grammar. Whenever an error occurs, the parser assigns all tokens to
> the "error" non-terminal until it concludes that it can perform a
> reduce action.
> 
> A similar approach might work for the Python Grammar. For each
> production, you'd define a set of stabilization tokens. If these are
> encountered, then the rule would be considered complete. Everything is
> consumed until a stabilization token is found.
> 
> For example, all expressions could be stabilized with a
> keyword. I.e. if you encounter a syntax error inside an expression,
> you ignore all tokens until you see 'print', 'def', 'while', etc.
> 
> In some cases, it may be better to add input rather than removing
> it. For example, if you get an "inconsistent dedent" error, you could
> assume that this really was a consistent dedent, or you could assume
> it was not meant as a dedent at all. Likewise, if you get a
> single-quote start-of-string, with no single-quote until end-of-line,
> you just should assume there was one.
> 
> Adding error productions to ignore input until stabilization may be
> feasible on top of the existing parser. Adding tokens in the right
> place is probably harder - I'd personally go for a pure Python
> solution, that operates on Grammar/Grammar.

I think I'd prefer a Python solution too -- perhaps I could
start out with tokenize.py and muddle along that way. pylint
from Aaron Watters should also provide some inspiration.

Thanks,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From mal at lemburg.com  Thu Sep 21 12:42:46 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 21 Sep 2000 12:42:46 +0200
Subject: [Python-Dev] Python syntax checker ?
References: <39C8C834.5E3B90E7@lemburg.com> <200009210217.VAA06180@cj20424-a.reston1.va.home.com>
Message-ID: <39C9E626.6CF85658@lemburg.com>

Guido van Rossum wrote:
> 
> > Would it be possible to write a Python syntax checker that doesn't
> > stop processing at the first error it finds but instead tries
> > to continue as far as possible (much like make -k) ?
> >
> > If yes, could the existing Python parser/compiler be reused for
> > such a tool ?
> >
> > I was asked to write a tool which checks Python code and returns
> > a list of found errors (syntax error and possibly even some
> > lint warnings) instead of stopping at the first error it finds.
> 
> I had some ideas for this in the context of CP4E, and I even tried to
> implement some, but didn't get far enough to check it in anywhere.
> Then I lost track of the code in the BeOpen move.  (It wasn't very
> much.)
> 
> I used a completely different approach to parsing: look at the code
> from the outside in, e.g. when you see
> 
>   def foo(a,b,c):
>       print a
>       for i in range(b):
>           while x:
>               print v
>       else:
>           bah()
> 
> you first notice that there's a line starting with a 'def' keyword
> followed by some indented stuff; then you notice that the indented
> stuff is a line starting with 'print', a line starting with 'for'
> followed by more indented stuff, and a line starting with 'else' and
> more indented stuff; etc.

This is similar to my initial idea: syntax checking should continue
(or possibly restart) at the next found "block" after an error.

E.g. in Thomas' case:

if 1:

     doodle()

    forever()
    and_ever()
    <tons more code using 4-space indent>

the checker should continue at forever() possibly by restarting
checking at that line.

> This requires tokenization to succeed -- you need to know which lines are
> continuation lines, and what is a string or comment, before you can
> parse the rest; but I believe it can be made to work even in the face
> of quite severe problems.

Looks like this is a highly non-trivial job...

Thanks,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From mal at lemburg.com  Thu Sep 21 12:58:57 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 21 Sep 2000 12:58:57 +0200
Subject: [Python-Dev] Disabling Unicode readbuffer interface
References: <200009201922.VAA01669@loewis.home.cs.tu-berlin.de>
Message-ID: <39C9E9F1.81C50A35@lemburg.com>

"Martin v. Loewis" wrote:
> 
> I just tried to disable the getreadbufferproc on Unicode objects. Most
> of the test suite continues to work.

Martin, haven't you read my last post to Guido ? 

Completely disabling getreadbuf is not a solution worth considering --
it breaks far too much code which the test suite doesn't even test,
e.g. MarkH's win32 stuff produces tons of Unicode objects which
then can get passed to potentially all of the stdlib. The test suite
doesn't check these cases.
 
Here's another possible solution to the problem:

    Special case Unicode in getargs.c's code for "s#" only and leave
    getreadbuf enabled. "s#" could then return the default encoded
    value for the Unicode object while SRE et al. could still use 
    PyObject_AsReadBuffer() to get at the raw data.

> test_unicode fails, which is caused by "s#" not working anymore
> in readbuffer_encode when testing the unicode_internal encoding. That
> could be fixed (*).

True. It currently relies on the fact the "s#" returns the internal
raw data representation for Unicode.
 
> More concerning, sre fails when matching a unicode string. sre uses
> the getreadbufferproc to get to the internal representation. If it has
> sizeof(Py_UNICODE) times as many bytes as it is long, we got a unicode
> buffer (?!?).
> 
> I'm not sure what the right solution would be in this case: I *think*
> sre should have more specific knowledge of Unicode objects, so it
> should support objects with a buffer interface representing a 1-byte
> character string, or Unicode objects. Actually, is there anything
> wrong with sre operating on string and unicode objects only? It
> requires that the buffer has a single segment, anyway...

Ouch... but then again, it's a (documented ?) feature of re and
sre that they work on getreadbuf compatible objects, e.g.
mmap'ed files, so they'll have to use "s#" for accessing the
data.

Of course, with the above solution, SRE could use the 
PyObject_AsReadBuffer() API to get at the binary data.
 
> Regards,
> Martin
> 
> (*) The 'internal encoding' function should directly get to the
> representation of the unicode object, and readbuffer_encode could
> become Python:
> 
> def readbuffer_encode(o,errors="strict"):
>   b = buffer(o)
>   return str(b),len(b)
> 
> or be removed altogether, as it would (rightfully) stop working on
> unicode objects.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From jeremy at beopen.com  Thu Sep 21 16:58:54 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 21 Sep 2000 10:58:54 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib/xml/sax __init__.py,1.6,1.7
In-Reply-To: <200009211447.HAA02917@slayer.i.sourceforge.net>
References: <200009211447.HAA02917@slayer.i.sourceforge.net>
Message-ID: <14794.8750.83880.932497@bitdiddle.concentric.net>

Lars,

I just fixed the last set of checkins you made to the xml package.
You left the system in a state where test_minidom failed.  When part
of the regression test fails, it causes severe problems for all other
developers.  They have no way to know if the change they've just made
to the tuple object (for example) causes the failure or not.  Thus, it
is essential that the CVS repository never be in a state where the
regression tests fail.

You're kind of new around here, so I'll let you off with a warning
<wink>.

Jeremy


From martin at loewis.home.cs.tu-berlin.de  Thu Sep 21 18:19:53 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Thu, 21 Sep 2000 18:19:53 +0200
Subject: [Python-Dev] Disabling Unicode readbuffer interface
In-Reply-To: <39C9E9F1.81C50A35@lemburg.com> (mal@lemburg.com)
References: <200009201922.VAA01669@loewis.home.cs.tu-berlin.de> <39C9E9F1.81C50A35@lemburg.com>
Message-ID: <200009211619.SAA00737@loewis.home.cs.tu-berlin.de>

> Martin, haven't you read my last post to Guido ? 

I've read

http://www.python.org/pipermail/python-dev/2000-September/016162.html

where you express a preference for disabling the getreadbuf slot, in
addition to special-casing Unicode objects in s#. I've just tested the
effects of your solution 1 on the test suite. Or are you referring to
a different message?

> Completely disabling getreadbuf is not a solution worth considering --
> it breaks far too much code which the test suite doesn't even test,
> e.g. MarkH's win32 stuff produces tons of Unicode object which
> then can get passed to potentially all of the stdlib. The test suite
> doesn't check these cases.

Do you have any specific examples of what else would break? Looking at
all occurrences of 's#' in the standard library, I can't find a single
case where the current behaviour would be right - in all cases raising
an exception would be better. Again, any counter-examples?

>     Special case Unicode in getargs.c's code for "s#" only and leave
>     getreadbuf enabled. "s#" could then return the default encoded
>     value for the Unicode object while SRE et al. could still use 
>     PyObject_AsReadBuffer() to get at the raw data.

I think your option 2 is acceptable, although I feel the option 1
would expose more potential problems. What if an application
unknowingly passes a unicode object to md5.update? In testing, it may
always succeed as ASCII-only data is used, and it will suddenly start
breaking when non-ASCII strings are entered by some user. 

Using the internal rep would also be wrong in this case - the md5 hash
would depend on the byte order, which is probably not desired (*).

In any case, your option 2 would be a big improvement over the current
state, so I'll just shut up.

Regards,
Martin

(*) BTW, is there a meaningful way to define md5 for a Unicode string?
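[Editorial sketch: the practical answer to Martin's footnote is to define the
digest over an explicit encoding rather than the internal representation.
Shown with today's hashlib module; in 2000 the equivalent was the md5 module.]

```python
import hashlib

s = "caf\u00e9"  # a Unicode string with one non-ASCII character

# Hashing the internal code units ties the digest to byte order: the
# little- and big-endian UTF-16 encodings hash differently.
le = hashlib.md5(s.encode("utf-16-le")).hexdigest()
be = hashlib.md5(s.encode("utf-16-be")).hexdigest()
assert le != be

# Hashing one explicit, fixed encoding gives a well-defined digest.
stable = hashlib.md5(s.encode("utf-8")).hexdigest()
```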


From DavidA at ActiveState.com  Thu Sep 21 18:32:30 2000
From: DavidA at ActiveState.com (David Ascher)
Date: Thu, 21 Sep 2000 09:32:30 -0700 (Pacific Daylight Time)
Subject: [Python-Dev] Unobfuscated Perl Code Contest
Message-ID: <Pine.WNT.4.21.0009210931540.1868-100000@loom>

ObPython at the end...

---da

Unobfuscated Perl Code Contest
September 16, 19100

The Perl Gazette has announced the winners in the First
Annual _Un_obfuscated Perl Code Contest.  First place went
to Edwin Fuller, who submitted this unobfuscated program:

#!/usr/bin/perl
print "Hello world!\n";

"This was definitely a challenging contest," said an
ecstatic Edwin Fuller. "I've never written a Perl program
before that didn't have hundreds of qw( $ @ % & * | ? / \ !
# ~ ) symbols.  I really had to summon all of my
programming skills to produce an unobfuscated program."

The judges in the contest learned that many programmers
don't understand the meaning of 'unobfuscated perl'.  For
instance, one participant sent in this 'Hello world!'
program:

#!/usr/bin/perl
$x='unob';
open OUT, ">$x.c";
print OUT <<HERE_DOC;
#include <stdio.h>
int main(void) { 
 FILE *f=fopen("$x.sh", "w");
 fprintf(f,"echo Hello world!\\n");
 fclose(f);
 system("chmod +x $x.sh");
 system("./$x.sh"); return 0; 
}
HERE_DOC
close OUT;
system("gcc $x.c -o $x && ./$x");

"As an experienced Perl monger," said one of the judges, "I
can instantly tell that this program spits out C source
code that spits out a shell script to print 'Hello
world!'.  But this code certainly does not qualify as
unobfuscated Perl -- I mean, most of it isn't even written
in Perl!"

He added, "Out of all of the entries, only two were
actually unobfuscated perl.  Everything else looked like
line noise -- or worse."

The second place winner, Mrs. Sea Pearl, submitted the
following code:

#!/usr/bin/perl
use strict;
# Do nothing, successfully
exit(0);

"I think everybody missed the entire point of this
contest," ranted one judge.  "Participants were supposed to
produce code that could actually be understood by somebody
other than a ten-year Perl veteran.  Instead, we get an
implementation of a Java Virtual Machine.  And a version of
the Linux kernel ported to Win32 Perl.  Sheesh!"

In response to the news, a rogue group of Perl hackers have
presented a plan to add a "use really_goddamn_strict"
pragma to the language that would enforce readability and
unobfuscation.  With this pragma in force, the Perl
compiler might say:

 Warning: Program contains zero comments.  You've probably
 never seen or used one before; they begin with a #
 symbol.  Please start using them or else a representative
 from the nearest Perl Mongers group will come to your
 house and beat you over the head with a cluestick.

 Warning: Program uses a cute trick at line 125 that might
 make sense in C.  But this isn't C!

 Warning: Code at line 412 indicates that programmer is an
 idiot. Please correct error between chair and monitor.

 Warning: While There's More Than One Way To Do It, your
 method at line 523 is particularly stupid.  Please try
 again.

 Warning: Write-only code detected between lines 612 and
 734. While this code is perfectly legal, you won't have
 any clue what it does in two weeks.  I recommend you start
 over.

 Warning: Code at line 1,024 is indistinguishable from line
 noise or the output of /dev/random

 Warning: Have you ever properly indented a piece of code
 in your entire life?  Evidently not.

 Warning: I think you can come up with a more descriptive
 variable name than "foo" at line 1,523.

 Warning: Programmer attempting to re-invent the wheel at
 line 2,231. There's a function that does the exact same
 thing on CPAN -- and it actually works.

 Warning: Perl tries to make the easy jobs easy without
 making the hard jobs impossible -- but your code at line
 5,123 is trying to make an easy job impossible.  

 Error: Programmer failed to include required string "All
 hail Larry Wall" within program.  Execution aborted due to
 compilation errors.

Of course, convincing programmers to actually use that
pragma is another matter.  "If somebody actually wanted to
write readable code, why would they use Perl?  Let 'em use
Python!" exclaimed one Usenet regular.  "So this pragma is
a waste of electrons, just like use strict and the -w
command line parameter."




From guido at beopen.com  Thu Sep 21 19:44:25 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 21 Sep 2000 12:44:25 -0500
Subject: [Python-Dev] Disabling Unicode readbuffer interface
In-Reply-To: Your message of "Thu, 21 Sep 2000 18:19:53 +0200."
             <200009211619.SAA00737@loewis.home.cs.tu-berlin.de> 
References: <200009201922.VAA01669@loewis.home.cs.tu-berlin.de> <39C9E9F1.81C50A35@lemburg.com>  
            <200009211619.SAA00737@loewis.home.cs.tu-berlin.de> 
Message-ID: <200009211744.MAA17168@cj20424-a.reston1.va.home.com>

I haven't researched this to the bottom, but based on the email
exchange, it seems that keeping getreadbuf and special-casing s# for
Unicode objects makes the most sense.  That makes the 's' and 's#'
more similar.  Note that 'z#' should also be fixed.

I believe that SRE uses PyObject_AsReadBuffer() so that it can work
with arrays of shorts as well (when shorts are two chars).  Kind of
cute.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From mal at lemburg.com  Thu Sep 21 19:16:17 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 21 Sep 2000 19:16:17 +0200
Subject: [Python-Dev] Disabling Unicode readbuffer interface
References: <200009201922.VAA01669@loewis.home.cs.tu-berlin.de> <39C9E9F1.81C50A35@lemburg.com>  
	            <200009211619.SAA00737@loewis.home.cs.tu-berlin.de> <200009211744.MAA17168@cj20424-a.reston1.va.home.com>
Message-ID: <39CA4261.2B586B3F@lemburg.com>

Guido van Rossum wrote:
> 
> I haven't researched this to the bottom, but based on the email
> exchange, it seems that keeping getreadbuf and special-casing s# for
> Unicode objects makes the most sense.  That makes the 's' and 's#'
> more similar.  Note that 'z#' should also be fixed.
> 
> I believe that SRE uses PyObject_AsReadBuffer() so that it can work
> with arrays of shorts as well (when shorts are two chars).  Kind of
> cute.

Ok, I'll check in a patch for special-casing Unicode objects
in getargs.c's "s#" later today.
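[Editorial sketch: in Python terms, the behaviour being agreed on amounts to
the following. The real change is C code in getargs.c; the function name here
is invented, and modern str/bytes stand in for 2.0's unicode/string types.]

```python
def convert_s_hash(obj, default_encoding="ascii"):
    """Sketch of the special-cased "s#" conversion: Unicode objects are
    encoded with the default encoding instead of exposing their raw
    internal buffer; byte strings pass through unchanged."""
    if isinstance(obj, str):                  # a Unicode object
        data = obj.encode(default_encoding)   # may raise UnicodeEncodeError
    elif isinstance(obj, (bytes, bytearray)):
        data = bytes(obj)
    else:
        raise TypeError("read-buffer object expected")
    return data, len(data)
```

With this rule, Martin's md5.update example fails loudly on non-ASCII input
instead of silently hashing byte-order-dependent internal data.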

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From mal at lemburg.com  Thu Sep 21 23:28:47 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 21 Sep 2000 23:28:47 +0200
Subject: [Python-Dev] Versioning for Python packages
References: <200009192300.RAA01451@localhost.localdomain> <39C87B69.DD0D2DC9@lemburg.com> <200009201507.KAA04851@cj20424-a.reston1.va.home.com>  
	            <39C8CEB5.65A70BBE@lemburg.com> <200009211538.KAA08180@cj20424-a.reston1.va.home.com>
Message-ID: <39CA7D8F.633E74D6@lemburg.com>

[Moved to python-dev from xml-sig]

Guido van Rossum wrote:
> 
> > Perhaps a good start would be using lib/python-2.0.0 as installation
> > target rather than just lib/python2. I'm sure this was discussed
> > before, but given the problems we had with this during the 1.5
> > cycle (with 1.5.2 providing not only patches, but also new
> > features), I think a more fine-grained approach should be
> > considered for future versions.
> 
> We're using lib/python2.0, and we plan not to make major releases with
a 3rd level version number increment!  So I think that's not necessary.

Ah, that's good news :-)
 
> > About package versioning: how would the version be specified
> > in imports ?
> >
> > from mx.DateTime(1.4.0) import now
> > from mx(1.0.0).DateTime import now
> > from mx(1.0.0).DateTime(1.4.0) import now
> >
> > The directory layout would then look something like this:
> >
> > mx/
> >       1.0.0/
> >               DateTime/
> >                       1.4.0/
> >
> > Package __path__ hooks could be used to implement the
> > lookup... or of course some new importer.
> >
> > But what happens if there is no (old) version mx-1.0.0 installed ?
> > Should Python then default to mx-1.3.0 which is installed or
> > raise an ImportError ?
> >
> > This sounds like trouble... ;-)
> 
> You've got it.  Please move this to python-dev.  It's good PEP
> material!

Done.
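[Editorial sketch: the version-directory lookup described above can already be
approximated with the __path__ hook MAL mentions, without any new import
syntax. The package name "mx" and the version strings are illustrative; the
demo builds a throwaway package on disk to show the mechanism.]

```python
import os
import sys
import tempfile

# Build mx/1.0.0/DateTime.py in a scratch directory.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "mx")
os.makedirs(os.path.join(pkg, "1.0.0"))

# The package __init__ redirects submodule lookup into the directory
# named by the selected version, so "import mx.DateTime" resolves to
# mx/1.0.0/DateTime.py.  If the version is absent it raises ImportError
# rather than falling back -- the strict answer to the question above.
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write(
        "import os\n"
        "_wanted = '1.0.0'  # could come from config instead\n"
        "_versioned = os.path.join(os.path.dirname(__file__), _wanted)\n"
        "if not os.path.isdir(_versioned):\n"
        "    raise ImportError('mx version %s not installed' % _wanted)\n"
        "__path__ = [_versioned]\n"
    )
with open(os.path.join(pkg, "1.0.0", "DateTime.py"), "w") as f:
    f.write("def now():\n    return 'mx 1.0.0 DateTime.now'\n")

sys.path.insert(0, root)
import mx.DateTime

result = mx.DateTime.now()
```

An ordinary `import mx.DateTime` then transparently picks the selected
version directory.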
 
> > > > We will have a similar problem with Unicode and the stdlib
> > > > during the Python 2.0 cycle: people will want to use Unicode
> > > > together with the stdlib, yet many modules in the stdlib
> > > > don't support Unicode. To remedy this, users will have to
> > > > patch the stdlib modules and put them somewhere so that they
> > > > can override the original 2.0 ones.
> > >
> > > They can use $PYTHONPATH.
> >
> > True, but why not help them a little by letting site
> > installations override the stdlib ? After all, distutils
> > standard target is site-packages.
> 
> Overrides of the stdlib are dangerous in general and should not be
> encouraged.
> 
> > > > BTW, with distutils coming on strong I don't really see a
> > > > need for any hacks: instead distutils should be given some
> > > > smart logic to do the right thing, ie. it should support
> > > > installing subpackages of a package. If that's not desired,
> > > > then I'd opt for overriding the whole package (without any
> > > > hacks to import the overridden one).
> > >
> > > That's another possibility.  But then distutils will have to become
> > > aware of package versions again.
> >
> > This shouldn't be hard to add to the distutils processing:
> > before starting an installation of a package, the package
> > pre-install hook could check which versions are installed
> > and then decide whether to raise an exception or continue.
> 
> Here's another half-baked idea about versions: perhaps packages could
> have a __version__.py file?

Hmm, I usually put a __version__ attribute right into the
__init__.py file of the package -- why another file ?

I think we should come up with a convention on these
meta-attributes. They are useful for normal modules
as well, e.g. __version__, __copyright__, __author__, etc.

Looks like it's PEP-time again ;-)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From jeremy at beopen.com  Fri Sep 22 22:29:18 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Fri, 22 Sep 2000 16:29:18 -0400 (EDT)
Subject: [Python-Dev] Sunday code freeze
Message-ID: <14795.49438.749774.32159@bitdiddle.concentric.net>

We will need about a day to prepare the 2.0b2 release.  Thus, all
changes need to be committed by the end of the day on Sunday.  A code
freeze will be in effect starting then.

Please try to resolve any patches or bugs assigned to you before the
code freeze.

Jeremy


From thomas at xs4all.net  Sat Sep 23 14:26:51 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Sat, 23 Sep 2000 14:26:51 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0042.txt,1.19,1.20
In-Reply-To: <200009230440.VAA11540@slayer.i.sourceforge.net>; from fdrake@users.sourceforge.net on Fri, Sep 22, 2000 at 09:40:47PM -0700
References: <200009230440.VAA11540@slayer.i.sourceforge.net>
Message-ID: <20000923142651.A20757@xs4all.nl>

On Fri, Sep 22, 2000 at 09:40:47PM -0700, Fred L. Drake wrote:

> Modified Files:
> 	pep-0042.txt 
> Log Message:
> 
> Added request for a portable time.strptime() implementation.

As Tim noted, there already was a request for a separate implementation of
strptime(), though slightly differently worded. I've merged them.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From tim_one at email.msn.com  Sat Sep 23 22:44:27 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 23 Sep 2000 16:44:27 -0400
Subject: [Python-Dev] FW: Compiling Python 1.6 under MacOS X ...
Message-ID: <LNBBLJKPBEHFEDALKOLCIEJLHHAA.tim_one@email.msn.com>

FYI.

-----Original Message-----
From: python-list-admin at python.org
[mailto:python-list-admin at python.org]On Behalf Of Thelonious Georgia
Sent: Saturday, September 23, 2000 4:05 PM
To: python-list at python.org
Subject: Compiling Python 1.6 under MacOS X ...


Hey all-

I'm trying to get the 1.6 sources to compile under the public beta of MacOS
X. I ran ./configure, then make, and it does a pretty noble job of
compiling, up until I get:

cc -g -O2 -I./../Include -I.. -DHAVE_CONFIG_H   -c -o unicodectype.o
unicodectype.c
cc: Internal compiler error: program cpp-precomp got fatal signal 11
make[1]: *** [unicodectype.o] Error 1
make: *** [Objects] Error 2
[dhcppc4:~/Python-1.6] root#

cc -v returns:
Reading specs from /usr/libexec/ppc/2.95.2/specs
Apple Computer, Inc. version cc-796.3, based on gcc driver version 2.7.2.1
exec2

I have searched high and low, but can find no mention of this particular
error (which makes sense, sure, because of how long the beta has been out),
but any help in getting around this particular error would be appreciated.

Theo


--
http://www.python.org/mailman/listinfo/python-list




From tim_one at email.msn.com  Sun Sep 24 01:31:41 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 23 Sep 2000 19:31:41 -0400
Subject: [Python-Dev] FW: regarding the Python Developer posting...
Message-ID: <LNBBLJKPBEHFEDALKOLCMEKAHHAA.tim_one@email.msn.com>

Dan, anyone can mail to python-dev at python.org.

Everyone else, this appears to be a followup on the Mac OSX compiler error.

Dan, I replied to that on comp.lang.python; if you have bugs to report
(platform-specific or otherwise) against the current CVS tree, SourceForge
is the best place to do it.  Since the 1.6 release is history, it's too late
to change anything there.

-----Original Message-----
From: Dan Wolfe [mailto:dkwolfe at pacbell.net]
Sent: Saturday, September 23, 2000 5:35 PM
To: tim_one at email.msn.com
Subject: regarding the Python Developer posting...


Howdy Tim,

I can't send to the development list so you're gonna have to suffer... ;-)

With regards to:

<http://www.python.org/pipermail/python-dev/2000-September/016188.html>

>cc -g -O2 -I./../Include -I.. -DHAVE_CONFIG_H   -c -o unicodectype.o
>unicodectype.c
>cc: Internal compiler error: program cpp-precomp got fatal signal 11
>make[1]: *** [unicodectype.o] Error 1
>make: *** [Objects] Error 2
>[dhcppc4:~/Python-1.6] root#

I believe it's a bug in the cpp pre-comp as it also appears under 2.0.
I've been able to work around it by passing -traditional-cpp to the
compiler and it doesn't complain... ;-)  I'll take it up with Stan Steb
(the compiler guy) when I go into work on Monday.

Now if I can just figure out the test_sre.py, I'll be happy. (eg it
compiles and runs but is still not passing all the regression tests).

- Dan




From gvwilson at nevex.com  Sun Sep 24 16:26:37 2000
From: gvwilson at nevex.com (Greg Wilson)
Date: Sun, 24 Sep 2000 10:26:37 -0400 (EDT)
Subject: [Python-Dev] serializing Python as XML
Message-ID: <Pine.LNX.4.10.10009241022590.14730-100000@akbar.nevex.com>

Hi, everyone.  One of the Software Carpentry designers has asked whether a
package exists to serialize Python data structures as XML, so that lists
of dictionaries of tuples of etc. can be exchanged with other XML-aware
tools.  Does this exist, even in pre-release form?  If not, I'd like to
hear from anyone who's already done any thinking in this direction.

Thanks,
Greg

p.s. has there ever been discussion about adding an '__xml__' method to
Python to augment the '__repr__' and '__str__' methods?





From fdrake at beopen.com  Sun Sep 24 16:27:55 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Sun, 24 Sep 2000 10:27:55 -0400 (EDT)
Subject: [Python-Dev] serializing Python as XML
In-Reply-To: <Pine.LNX.4.10.10009241022590.14730-100000@akbar.nevex.com>
References: <Pine.LNX.4.10.10009241022590.14730-100000@akbar.nevex.com>
Message-ID: <14798.3947.965595.628569@cj42289-a.reston1.va.home.com>

Greg Wilson writes:
 > Hi, everyone.  One of the Software Carpentry designers has asked whether a
 > package exists to serialize Python data structures as XML, so that lists
 > of dictionaries of tuples of etc. can be exchanged with other XML-aware
 > tools.  Does this exist, even in pre-release form?  If not, I'd like to
 > hear from anyone who's already done any thinking in this direction.

  There are at least two implementations; I'm not sure of their exact
status.
  The PyXML package contains something called xml.marshal, written by Andrew
Kuchling.  I've also seen something called Python xml_objectify (I
think) announced on Freshmeat.net.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From gvwilson at nevex.com  Sun Sep 24 17:00:03 2000
From: gvwilson at nevex.com (Greg Wilson)
Date: Sun, 24 Sep 2000 11:00:03 -0400 (EDT)
Subject: [Python-Dev] installer difficulties
Message-ID: <Pine.LNX.4.10.10009241056300.14730-100000@akbar.nevex.com>

I just ran the "uninstall" that comes with BeOpen-Python-2.0b1.exe (the
September 8 version), then re-ran the installer.  A little dialog came up
saying "Corrupt installation detected", and the installer exits. Deleted
all of my g:\python2.0 files, all the registry entries, etc. --- same
behavior.

1. What is it looking at to determine whether the installation is corrupt?
   The installer itself, or my hard drive?  (If the former, my copy of the
   downloaded installer is 5,970,597 bytes long.)

2. What's the fix?

Thanks,
Greg





From skip at mojam.com  Sun Sep 24 17:19:10 2000
From: skip at mojam.com (Skip Montanaro)
Date: Sun, 24 Sep 2000 10:19:10 -0500 (CDT)
Subject: [Python-Dev] serializing Python as XML
In-Reply-To: <14798.3947.965595.628569@cj42289-a.reston1.va.home.com>
References: <Pine.LNX.4.10.10009241022590.14730-100000@akbar.nevex.com>
	<14798.3947.965595.628569@cj42289-a.reston1.va.home.com>
Message-ID: <14798.7022.727038.770709@beluga.mojam.com>

    >> Hi, everyone.  One of the Software Carpentry designers has asked
    >> whether a package exists to serialize Python data structures as XML,
    >> so that lists of dictionaries of tuples of etc. can be exchanged with
    >> other XML-aware tools.

    Fred> There are at least two implementations ... PyXML & xml_objectify 

You can also use XML-RPC (http://www.xmlrpc.com/) or SOAP
(http://www.develop.com/SOAP/).  In Fredrik Lundh's xmlrpclib library
(http://www.pythonware.com/products/xmlrpc/) you can access the dump and
load functions without actually using the rest of the protocol if you like.
I suspect there are similar hooks in soaplib
(http://www.pythonware.com/products/soap/).
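[Editorial sketch: using xmlrpclib's marshalling without the transport looks
like this. The module name xmlrpc.client is the modern stdlib location; in
2000 this was Fredrik Lundh's standalone xmlrpclib, whose dumps/loads work
the same way.]

```python
import xmlrpc.client as xmlrpclib

data = [{"name": "spam", "sizes": (1, 2, 3)}]

# dumps() takes a tuple of parameters and returns their XML encoding.
xml = xmlrpclib.dumps((data,))
assert "<params>" in xml

# loads() gives back (params, methodname); XML-RPC has only an array
# type, so the inner tuple comes back as a list.
(roundtripped,), method = xmlrpclib.loads(xml)
assert roundtripped == [{"name": "spam", "sizes": [1, 2, 3]}]
assert method is None
```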

-- 
Skip Montanaro (skip at mojam.com)
http://www.mojam.com/
http://www.musi-cal.com/



From tim_one at email.msn.com  Sun Sep 24 19:55:15 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 24 Sep 2000 13:55:15 -0400
Subject: [Python-Dev] installer difficulties
In-Reply-To: <Pine.LNX.4.10.10009241056300.14730-100000@akbar.nevex.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCOELDHHAA.tim_one@email.msn.com>

[posted & mailed]

[Greg Wilson]
> I just ran the "uninstall" that comes with BeOpen-Python-2.0b1.exe (the
> September 8 version), then re-ran the installer.  A little dialog came up
> saying "Corrupt installation detected", and the installer exits. Deleted
> all of my g:\python2.0 files, all the registry entries, etc. --- same
> behavior.
>
> 1. What is it looking at to determine whether the installation is
>    corrupt?

While I built the installer, I have no idea!  It's an internal function of
the Wise software, and-- you guessed it <wink> --that's closed-source.  I
*believe* it's failing an internal consistency check, and that's all.

>    The installer itself, or my hard drive?  (If the former, my copy
>    of the downloaded installer is 5,970,597 bytes long.)

That is the correct size.

> 2. What's the fix?

Dunno.  It's a new one on me, and I uninstall and reinstall many times each
week.  Related things occasionally pop up on Python-Help, and are usually
fixed there by asking the victim to try downloading again with some other
program (Netscape instead of IE, or vice versa, or FTP, or GetRight, ...).

Here's a better check, provided you have *some* version of Python sitting
around:

>>> path = "/updates/BeOpen-Python-2.0b1.exe" # change accordingly
>>> import os
>>> os.path.getsize(path)
5970597
>>> guts = open(path, "rb").read()
>>> len(guts)
5970597
>>> import sha
>>> print sha.new(guts).hexdigest()
ef495d351a93d887f5df6b399747d4e96388b0d5
>>>

If you don't get the same SHA digest, it is indeed corrupt despite having
the correct size.  Let us know!





From martin at loewis.home.cs.tu-berlin.de  Sun Sep 24 19:56:04 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Sun, 24 Sep 2000 19:56:04 +0200
Subject: [Python-Dev] serializing Python as XML
Message-ID: <200009241756.TAA00735@loewis.home.cs.tu-berlin.de>

> whether a package exists to serialize Python data structures as XML,

Zope has a variant of pickle where pickles follow an XML DTD (i.e. it
pickles into XML). I believe the current implementation first pickles
into an ASCII pickle and reformats that as XML afterwards, but that is
an implementation issue.

> so that lists of dictionaries of tuples of etc. can be exchanged
> with other XML-aware tools.

See, this is one of the common XML pitfalls. Even though the output of
that is well-formed XML, and even though there is an imaginary DTD (*)
which this XML could be validated against: it is still unlikely that
other XML-aware tools could make much use of the format, at least if
the original Python contained some "interesting" objects
(e.g. instance objects). Even with only dictionaries of tuples: The
Zope DTD supports cyclic structures; it would not be straightforward
to support the back-referencing structure in some other tool
(although certainly possible).

XML alone does not give interoperability. You need some agreed-upon
DTD for that. If that other XML-aware tool is willing to adopt a
Python-provided DTD - why couldn't it read Python pickles in the first
place?

Regards,
Martin

(*) There have been repeated promises of actually writing down the DTD
some day.



From tim_one at email.msn.com  Sun Sep 24 20:47:11 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 24 Sep 2000 14:47:11 -0400
Subject: [Python-Dev] How about braindead Unicode "compression"?
Message-ID: <LNBBLJKPBEHFEDALKOLCCELFHHAA.tim_one@email.msn.com>

unicodedatabase.c has 64K lines of the form:

/* U+009a */ { 13, 0, 15, 0, 0 },

Each struct getting initialized there takes 8 bytes on most machines (4
unsigned chars + a char*).

However, there are only 3,567 unique structs (54,919 of them are all 0's!).
So a braindead-easy mechanical "compression" scheme would simply be to
create one vector with the 3,567 unique structs, and replace the 64K record
constructors with 2-byte indices into that vector.  Data size goes down from

    64K * 8b = 512Kb

to

    3567 * 8b + 64K * 2b ~= 156Kb

at once; the source-code transformation is easy to do via a Python program;
the compiler warnings on my platform (due to unicodedatabase.c's sheer size)
can go away; and one indirection is added to access (which remains utterly
uniform).

Previous objections to compression were, as far as I could tell, based on
fear of elaborate schemes that rendered the code unreadable and the access
code excruciating.  But if we can get more than a factor of 3 with little
work and one new uniform indirection, do people still object?

If nobody objects by the end of today, I intend to do it.
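[Editorial sketch: the transformation really is mechanical; the index-building
half of the generator fits in a few lines, with the record layout abstracted
to tuples.]

```python
def compress_records(records):
    """Split a long, highly duplicated list of record tuples into a
    table of unique records plus one small index per original entry --
    the vector-plus-2-byte-index scheme described above."""
    unique = []        # the vector of unique structs
    position = {}      # record -> its slot in `unique`
    indices = []       # one index per original entry
    for record in records:
        if record not in position:
            position[record] = len(unique)
            unique.append(record)
        indices.append(position[record])
    assert len(unique) <= 0xFFFF   # every index must fit in 2 bytes
    return unique, indices

# A toy run: four entries, two distinct records.
records = [(13, 0, 15, 0, 0), (0, 0, 0, 0, 0),
           (0, 0, 0, 0, 0), (13, 0, 15, 0, 0)]
unique, indices = compress_records(records)
assert unique == [(13, 0, 15, 0, 0), (0, 0, 0, 0, 0)]
assert indices == [0, 1, 1, 0]
assert [unique[i] for i in indices] == records
```

At 8 bytes per unique struct and 2 bytes per index, 3,567 uniques plus 64K
indices come to roughly 156KB, matching the arithmetic above.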





From tim_one at email.msn.com  Sun Sep 24 22:26:40 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 24 Sep 2000 16:26:40 -0400
Subject: [Python-Dev] installer difficulties
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOELDHHAA.tim_one@email.msn.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCEELKHHAA.tim_one@email.msn.com>

[Tim]
> ...
> Here's a better check, provided you have *some* version of Python sitting
> around:
>
> >>> path = "/updates/BeOpen-Python-2.0b1.exe" # change accordingly
> >>> import os
> >>> os.path.getsize(path)
> 5970597
> >>> guts = open(path, "rb").read()
> >>> len(guts)
> 5970597
> >>> import sha
> >>> print sha.new(guts).hexdigest()
> ef495d351a93d887f5df6b399747d4e96388b0d5
> >>>
>
> If you don't get the same SHA digest, it is indeed corrupt despite having
> the correct size.  Let us know!

Greg reports getting

  e65aac55368b823e1c0bc30c0a5bc4dd2da2adb4

Someone else care to try this?  I tried it both on the original installer I
uploaded to BeOpen, and on the copy I downloaded back from the pythonlabs
download page right after Fred updated it.  At this point I don't know
whether BeOpen's disk is corrupted, or Greg's, or sha has a bug, or ...





From guido at beopen.com  Sun Sep 24 23:47:52 2000
From: guido at beopen.com (Guido van Rossum)
Date: Sun, 24 Sep 2000 16:47:52 -0500
Subject: [Python-Dev] How about braindead Unicode "compression"?
In-Reply-To: Your message of "Sun, 24 Sep 2000 14:47:11 -0400."
             <LNBBLJKPBEHFEDALKOLCCELFHHAA.tim_one@email.msn.com> 
References: <LNBBLJKPBEHFEDALKOLCCELFHHAA.tim_one@email.msn.com> 
Message-ID: <200009242147.QAA06557@cj20424-a.reston1.va.home.com>

> unicodedatabase.c has 64K lines of the form:
> 
> /* U+009a */ { 13, 0, 15, 0, 0 },
> 
> Each struct getting initialized there takes 8 bytes on most machines (4
> unsigned chars + a char*).
> 
> However, there are only 3,567 unique structs (54,919 of them are all 0's!).
> So a braindead-easy mechanical "compression" scheme would simply be to
> create one vector with the 3,567 unique structs, and replace the 64K record
> constructors with 2-byte indices into that vector.  Data size goes down from
> 
>     64K * 8b = 512Kb
> 
> to
> 
>     3567 * 8b + 64K * 2b ~= 156Kb
> 
> at once; the source-code transformation is easy to do via a Python program;
> the compiler warnings on my platform (due to unicodedatabase.c's sheer size)
> can go away; and one indirection is added to access (which remains utterly
> uniform).
> 
> Previous objections to compression were, as far as I could tell, based on
> fear of elaborate schemes that rendered the code unreadable and the access
> code excruciating.  But if we can get more than a factor of 3 with little
> work and one new uniform indirection, do people still object?
> 
> If nobody objects by the end of today, I intend to do it.

Go for it!  I recall seeing that file and thinking the same thing.

(Isn't the VC++ compiler warning about line numbers > 64K?  Then you'd
have to put two pointers on one line to make it go away, regardless of
the size of the generated object code.)

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Sun Sep 24 23:58:53 2000
From: guido at beopen.com (Guido van Rossum)
Date: Sun, 24 Sep 2000 16:58:53 -0500
Subject: [Python-Dev] installer difficulties
In-Reply-To: Your message of "Sun, 24 Sep 2000 16:26:40 -0400."
             <LNBBLJKPBEHFEDALKOLCEELKHHAA.tim_one@email.msn.com> 
References: <LNBBLJKPBEHFEDALKOLCEELKHHAA.tim_one@email.msn.com> 
Message-ID: <200009242158.QAA06679@cj20424-a.reston1.va.home.com>

>   e65aac55368b823e1c0bc30c0a5bc4dd2da2adb4
> 
> Someone else care to try this?  I tried it both on the original installer I
> uploaded to BeOpen, and on the copy I downloaded back from the pythonlabs
> download page right after Fred updated it.  At this point I don't know
> whether BeOpen's disk is corrupted, or Greg's, or sha has a bug, or ...

I just downloaded it again and tried your code, and got the same value
as Greg!  I also get Greg's error on Windows with the newly downloaded
version.

Conclusion: the new Zope-ified site layout has a corrupt file.

I'll try to get in touch with the BeOpen web developers right away!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From mal at lemburg.com  Sun Sep 24 23:20:06 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Sun, 24 Sep 2000 23:20:06 +0200
Subject: [Python-Dev] How about braindead Unicode "compression"?
References: <LNBBLJKPBEHFEDALKOLCCELFHHAA.tim_one@email.msn.com>
Message-ID: <39CE7006.D60A603D@lemburg.com>

Tim Peters wrote:
> 
> unicodedatabase.c has 64K lines of the form:
> 
> /* U+009a */ { 13, 0, 15, 0, 0 },
> 
> Each struct getting initialized there takes 8 bytes on most machines (4
> unsigned chars + a char*).
> 
> However, there are only 3,567 unique structs (54,919 of them are all 0's!).

That's because there are only around 11k definitions in the
Unicode database -- most of the rest is divided into private,
user defined and surrogate high/low byte reserved ranges.

> So a braindead-easy mechanical "compression" scheme would simply be to
> create one vector with the 3,567 unique structs, and replace the 64K record
> constructors with 2-byte indices into that vector.  Data size goes down from
> 
>     64K * 8b = 512Kb
> 
> to
> 
>     3567 * 8b + 64K * 2b ~= 156Kb
> 
> at once; the source-code transformation is easy to do via a Python program;
> the compiler warnings on my platform (due to unicodedatabase.c's sheer size)
> can go away; and one indirection is added to access (which remains utterly
> uniform).
> 
> Previous objections to compression were, as far as I could tell, based on
> fear of elaborate schemes that rendered the code unreadable and the access
> code excruciating.  But if we can get more than a factor of 3 with little
> work and one new uniform indirection, do people still object?

Oh, there was no fear about making the code unreadable...
Christian and Fredrik were both working on smart schemes.
My only objection about these was missing documentation
and generation tools -- vast tables of completely random
looking byte data are unreadable ;-)
 
> If nobody objects by the end of today, I intend to do it.

+1 from here.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From tim_one at email.msn.com  Sun Sep 24 23:25:34 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 24 Sep 2000 17:25:34 -0400
Subject: [Python-Dev] installer difficulties
In-Reply-To: <200009242158.QAA06679@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGELOHHAA.tim_one@email.msn.com>

[Guido]
> I just downloaded it again and tried your code, and got the same value
> as Greg!  I also get Greg's error on Windows with the newly downloaded
> version.
>
> Conclusion: the new Zope-ified site layout has a corrupt file.
>
> I'll try to get in touch with the BeOpen web developers right away!

Thanks!  In the meantime, I pointed Greg to anonymous FTP at
python.beopen.com, in directory /pub/tmp/.  That's where I originally
uploaded the installer, and I doubt our webmasters have had a chance to
corrupt it yet <0.9 wink>.





From mal at lemburg.com  Sun Sep 24 23:28:29 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Sun, 24 Sep 2000 23:28:29 +0200
Subject: [Python-Dev] FW: regarding the Python Developer posting...
References: <LNBBLJKPBEHFEDALKOLCMEKAHHAA.tim_one@email.msn.com>
Message-ID: <39CE71FD.8858B71D@lemburg.com>

Tim Peters wrote:
> 
> Dan, anyone can mail to python-dev at python.org.
> 
> Everyone else, this appears to be a followup on the Mac OSX compiler error.
> 
> Dan, I replied to that on comp.lang.python; if you have bugs to report
> (platform-specific or otherwise) against the current CVS tree, SourceForge
> is the best place to do it.  Since the 1.6 release is history, it's too late
> to change anything there.
> 
> -----Original Message-----
> From: Dan Wolfe [mailto:dkwolfe at pacbell.net]
> Sent: Saturday, September 23, 2000 5:35 PM
> To: tim_one at email.msn.com
> Subject: regarding the Python Developer posting...
> 
> Howdy Tim,
> 
> I can't send to the development list so you're gonna have to suffer... ;-)
> 
> With regards to:
> 
> <http://www.python.org/pipermail/python-dev/2000-September/016188.html>
> 
> >cc -g -O2 -I./../Include -I.. -DHAVE_CONFIG_H   -c -o unicodectype.o
> >unicodectyc
> >cc: Internal compiler error: program cpp-precomp got fatal signal
> 11make[1]:
> >*** [unicodectype.o] Error 1
> >make: *** [Objects] Error 2
> >dhcppc4:~/Python-1.6] root#
> 
> I believe it's a bug in the cpp pre-comp as it also appears under 2.0.
> I've been able to work around it by passing -traditional-cpp to the
> compiler and it doesn't complain... ;-)  I'll take it up with Stan Steb
> (the compiler guy) when I go into work on Monday.

You could try to enable the macro at the top of unicodectype.c:
 
#if defined(macintosh) || defined(MS_WIN64)
/*XXX This was required to avoid a compiler error for an early Win64
 * cross-compiler that was used for the port to Win64. When the platform is
 * released the MS_WIN64 inclusion here should no longer be necessary.
 */
/* This probably needs to be defined for some other compilers too. It breaks the
** 5000-label switch statement up into switches with around 1000 cases each.
*/
#define BREAK_SWITCH_UP return 1; } switch (ch) {
#else
#define BREAK_SWITCH_UP /* nothing */
#endif

If it does compile with the work-around enabled, please
give us a set of defines which identify the compiler and
platform so we can enable it per default for your setup.

> Now if I can just figure out test_sre.py, I'll be happy (e.g. it
> compiles and runs but is still not passing all the regression tests).

Thanks,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From guido at beopen.com  Mon Sep 25 00:34:28 2000
From: guido at beopen.com (Guido van Rossum)
Date: Sun, 24 Sep 2000 17:34:28 -0500
Subject: [Python-Dev] installer difficulties
In-Reply-To: Your message of "Sun, 24 Sep 2000 17:25:34 -0400."
             <LNBBLJKPBEHFEDALKOLCGELOHHAA.tim_one@email.msn.com> 
References: <LNBBLJKPBEHFEDALKOLCGELOHHAA.tim_one@email.msn.com> 
Message-ID: <200009242234.RAA06931@cj20424-a.reston1.va.home.com>

> Thanks!  In the meantime, I pointed Greg to anonymous FTP at
> python.beopen.com, in directory /pub/tmp/.  That's where I originally
> uploaded the installer, and I doubt our webmasters have had a chance to
> corrupt it yet <0.9 wink>.

Other readers of this forum may find other cruft there that appears
useful; however, I believe the files found there may not be the
correct versions either.

BTW, the source tarball on the new pythonlabs.com site is also
corrupt; the docs are bad links; I suspect that the RPMs are also
corrupt.  What an embarrassment.  (We proofread all the webpages but
never thought of testing the downloads!)

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From tim_one at email.msn.com  Sun Sep 24 23:39:49 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 24 Sep 2000 17:39:49 -0400
Subject: [Python-Dev] How about braindead Unicode "compression"?
In-Reply-To: <39CE7006.D60A603D@lemburg.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCKELPHHAA.tim_one@email.msn.com>

[Tim]
>> Previous objections to compression were, as far as I could
>> tell, based on fear of elaborate schemes that rendered the code
>> unreadable and the access code excruciating.  But if we can get
>> more than a factor of 3 with little work and one new uniform
>> indirection, do people still object?

[M.-A. Lemburg]
> Oh, there was no fear about making the code unreadable...
> Christian and Fredrik were both working on smart schemes.
> My only objection about these was missing documentation
> and generation tools -- vast tables of completely random
> looking byte data are unreadable ;-)

OK, you weren't afraid of making the code unreadable, but you did object to
making it unreadable.  Got it <wink>.  My own view is that the C data table
source code "should be" generated by a straightforward Python program
chewing over the unicode.org data files.  But since that's the correct view,
I'm sure it's yours too.
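
Such a generator really can be straightforward.  A hedged sketch of
the idea (the semicolon-separated field layout matches unicode.org's
UnicodeData.txt, with the code point first and the general category
third; the emitted C structure is purely illustrative, not the format
actually checked in):

```python
def gen_category_table(lines):
    # Each UnicodeData.txt record is semicolon-separated:
    # code point (hex); name; general category; ...
    entries = []
    for line in lines:
        fields = line.split(";")
        codepoint = int(fields[0], 16)
        category = fields[2]
        entries.append('    {0x%04X, "%s"},' % (codepoint, category))
    return ("static const struct { int ch; const char *cat; } cats[] = {\n"
            + "\n".join(entries) + "\n};\n")

sample = gen_category_table(
    ["0041;LATIN CAPITAL LETTER A;Lu;0;L;;;;;N;;;;;0061;"])
```

Keeping a script like this next to the generated table is exactly the
"documentation and generation tools" point: the table stays opaque,
but anyone can regenerate or audit it.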

>> If nobody objects by the end of today, I intend to do it.

> +1 from here.

/F and I talked about it offline.  We'll do *something* before the day is
done, and I suspect everyone will be happy.  Waiting for a superb scheme has
thus far stopped us from making any improvements at all, and at this late
point a Big Crude Yet Delicate Hammer is looking mighty attractive.

petitely y'rs  - tim





From effbot at telia.com  Mon Sep 25 00:01:06 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Mon, 25 Sep 2000 00:01:06 +0200
Subject: [Python-Dev] How about braindead Unicode "compression"?
References: <LNBBLJKPBEHFEDALKOLCKELPHHAA.tim_one@email.msn.com>
Message-ID: <008f01c02672$f3f1a100$766940d5@hagrid>

tim wrote:
> /F and I talked about it offline.  We'll do *something* before the day is
> done, and I suspect everyone will be happy.

Okay, I just went ahead and checked in a new version of the
unicodedata stuff, based on my earlier unidb work.

On windows, the new unicodedata PYD is 120k (down from 600k),
and the source distribution should be about 2 megabytes smaller
than before (!).

If you're on a non-windows platform, please try out the new code
as soon as possible.  You need to check out:

        Modules/unicodedata.c
        Modules/unicodedatabase.c
        Modules/unicodedatabase.h
        Modules/unicodedata_db.h (new file)

Let me know if there are any build problems.
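
Assuming the checkout builds, a quick interpreter-level smoke test is
just a few lookups (these values are fixed by the Unicode database, so
they should hold on any platform; it's a sanity check, not a substitute
for the regression tests):

```python
import unicodedata

# a few spot checks against the compressed tables
assert unicodedata.category(u"A") == "Lu"           # uppercase letter
assert unicodedata.name(u"A") == "LATIN CAPITAL LETTER A"
assert unicodedata.decimal(u"9") == 9
assert unicodedata.bidirectional(u"A") == "L"       # left-to-right
```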

I'll check in the code generator script as soon as I've figured out
where to put it...  (how about Tools/unicode?)

</F>




From mal at lemburg.com  Mon Sep 25 09:57:36 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 25 Sep 2000 09:57:36 +0200
Subject: [Python-Dev] How about braindead Unicode "compression"?
References: <LNBBLJKPBEHFEDALKOLCKELPHHAA.tim_one@email.msn.com>
Message-ID: <39CF0570.FDDCF03C@lemburg.com>

Tim Peters wrote:
> 
> [Tim]
> >> Previous objections to compression were, as far as I could
> >> tell, based on fear of elaborate schemes that rendered the code
> >> unreadable and the access code excruciating.  But if we can get
> >> more than a factor of 3 with little work and one new uniform
> >> indirection, do people still object?
> 
> [M.-A. Lemburg]
> > Oh, there was no fear about making the code unreadable...
> > Christian and Fredrik were both working on smart schemes.
> > My only objection about these was missing documentation
> > and generation tools -- vast tables of completely random
> > looking byte data are unreadable ;-)
> 
> OK, you weren't afraid of making the code unreadable, but you did object to
> making it unreadable.  Got it <wink>. 

Ah yes, the old coffee syndrome again (or maybe just the jet-lag from
watching the Olympics in the very early morning hours).

What I meant was that I consider checking in unreadable
binary goop *without* documentation and generation tools
not a good idea. Now that Fredrik checked in the generators
as well, everything is fine.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From thomas at xs4all.net  Mon Sep 25 15:56:17 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 25 Sep 2000 15:56:17 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules posixmodule.c,2.173,2.174
In-Reply-To: <200009251322.GAA21574@slayer.i.sourceforge.net>; from gvanrossum@users.sourceforge.net on Mon, Sep 25, 2000 at 06:22:04AM -0700
References: <200009251322.GAA21574@slayer.i.sourceforge.net>
Message-ID: <20000925155616.H20757@xs4all.nl>

On Mon, Sep 25, 2000 at 06:22:04AM -0700, Guido van Rossum wrote:
> Update of /cvsroot/python/python/dist/src/Modules
> In directory slayer.i.sourceforge.net:/tmp/cvs-serv21486
> 
> Modified Files:
> 	posixmodule.c 
> Log Message:
> Add missing prototypes for the benefit of SunOS 4.1.4 */

These should go in pyport.h ! Unless you have some reason not to export them
to other files, but in that case we need to take a good look at the whole
pyport.h thing.

> + #if defined(sun) && !defined(__SVR4)
> + /* SunOS 4.1.4 doesn't have prototypes for these: */
> + extern int rename(const char *, const char *);
> + extern int pclose(FILE *);
> + extern int fclose(FILE *);
> + #endif
> + 


-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From jim at interet.com  Mon Sep 25 15:55:56 2000
From: jim at interet.com (James C. Ahlstrom)
Date: Mon, 25 Sep 2000 09:55:56 -0400
Subject: [Python-Dev] Python syntax checker ?
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de>
Message-ID: <39CF596C.17BA4DC5@interet.com>

Martin von Loewis wrote:
> 
> > Would it be possible to write a Python syntax checker that doesn't
> > stop processing at the first error it finds but instead tries to
> > continue as far as possible (much like make -k) ?
> 
> The common approach is to insert or remove tokens, using some
> heuristics. In YACC, it is possible to add error productions to the
> grammar. Whenever an error occurs, the parser assigns all tokens to
> the "error" non-terminal until it concludes that it can perform a
> reduce action.
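
A much cruder version of "keep going after an error" needs no grammar
changes at all: reparse from just past each failure and collect every
SyntaxError, make -k style.  A sketch (the function name is
hypothetical, and resuming mid-file will confuse it on indented
continuation lines; real recovery, as described above, happens at the
token level inside the parser):

```python
import ast

def check_source(source):
    """Collect as many SyntaxErrors as possible instead of stopping
    at the first one: after each error, resume parsing just past the
    offending line."""
    errors = []
    lines = source.splitlines()
    start = 0
    while start < len(lines):
        chunk = "\n".join(lines[start:])
        try:
            ast.parse(chunk)
            break                       # the rest parses cleanly
        except SyntaxError as e:
            lineno = start + (e.lineno or 1)   # absolute line number
            errors.append((lineno, e.msg))
            start = lineno              # skip the bad line, keep going
    return errors
```

Since `e.lineno` is at least 1, `start` strictly increases and the
loop always terminates.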

The following is based on trying (a great learning experience)
to write a better Python lint.

There are IMHO two problems with the current Python
grammar file.  It is not possible to express operator
precedence, so deliberate shift/reduce conflicts are
used instead.  That makes the parse tree complicated
and non-intuitive.  And there is no provision for error
productions.  YACC has both of these as built-in features.

I also found speed problems with tokenize.py.  AFAIK,
it only exists because tokenizer.c does not provide
comments as tokens, but eats them instead.  We could
modify tokenizer.c, then make tokenize.py be the
interface to the fast C tokenizer.  This eliminates the
problem of updating both too.
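
The comments-as-tokens difference is easy to demonstrate from Python
(spelled here with today's tokenize API):

```python
import io
import tokenize

# tokenize.py reports comments as COMMENT tokens; the C tokenizer
# feeding the parser simply eats them.
source = "x = 1  # a comment\n"
tokens = tokenize.generate_tokens(io.StringIO(source).readline)
kinds = [tokenize.tok_name[tok.type] for tok in tokens]
assert "COMMENT" in kinds
```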

So how about re-writing the Python grammar in YACC in
order to use its more advanced features??  The simple
YACC grammar I wrote for 1.5.2 plus an altered tokenizer.c
parsed the whole Lib/*.py in a couple seconds vs. 30
seconds for the first file using Aaron Watters' Python
lint grammar written in Python.

JimA



From bwarsaw at beopen.com  Mon Sep 25 16:18:36 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 25 Sep 2000 10:18:36 -0400 (EDT)
Subject: [Python-Dev] Python syntax checker ?
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de>
	<39CF596C.17BA4DC5@interet.com>
Message-ID: <14799.24252.537090.326130@anthem.concentric.net>

>>>>> "JCA" == James C Ahlstrom <jim at interet.com> writes:

    JCA> So how about re-writing the Python grammar in YACC in
    JCA> order to use its more advanced features??  The simple
    JCA> YACC grammar I wrote for 1.5.2 plus an altered tokenizer.c
    JCA> parsed the whole Lib/*.py in a couple seconds vs. 30
    JCA> seconds for the first file using Aaron Watters' Python
    JCA> lint grammar written in Python.

I've been wanting to check out Antlr (www.antlr.org) because it gives
us the /possibility/ to use the same grammar files for both CPython
and JPython.  One problem though is that it generates Java and C++ so
we'd be accepting our first C++ into the core if we went this route.

-Barry



From gward at mems-exchange.org  Mon Sep 25 16:40:09 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Mon, 25 Sep 2000 10:40:09 -0400
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: <39C8C834.5E3B90E7@lemburg.com>; from mal@lemburg.com on Wed, Sep 20, 2000 at 04:22:44PM +0200
References: <39C8C834.5E3B90E7@lemburg.com>
Message-ID: <20000925104009.A1747@ludwig.cnri.reston.va.us>

On 20 September 2000, M.-A. Lemburg said:
> Would it be possible to write a Python syntax checker that doesn't
> stop processing at the first error it finds but instead tries
> to continue as far as possible (much like make -k) ?
> 
> If yes, could the existing Python parser/compiler be reused for
> such a tool ?


From gward at mems-exchange.org  Mon Sep 25 16:43:10 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Mon, 25 Sep 2000 10:43:10 -0400
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: <14799.24252.537090.326130@anthem.concentric.net>; from bwarsaw@beopen.com on Mon, Sep 25, 2000 at 10:18:36AM -0400
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de> <39CF596C.17BA4DC5@interet.com> <14799.24252.537090.326130@anthem.concentric.net>
Message-ID: <20000925104310.B1747@ludwig.cnri.reston.va.us>

On 25 September 2000, Barry A. Warsaw said:
> I've been wanting to check out Antlr (www.antlr.org) because it gives
> us the /possibility/ to use the same grammar files for both CPython
> and JPython.  One problem though is that it generates Java and C++ so
> we'd be accepting our first C++ into the core if we went this route.

Or contribute a C back-end to ANTLR -- I've been toying with this idea
for, ummm, too damn long now.  Years.

        Greg



From jeremy at beopen.com  Mon Sep 25 16:50:30 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Mon, 25 Sep 2000 10:50:30 -0400 (EDT)
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: <39CF596C.17BA4DC5@interet.com>
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de>
	<39CF596C.17BA4DC5@interet.com>
Message-ID: <14799.26166.965015.344977@bitdiddle.concentric.net>

>>>>> "JCA" == James C Ahlstrom <jim at interet.com> writes:

  JCA> The following is based on trying (a great learning experience)
  JCA> to write a better Python lint.

  JCA> There are IMHO two problems with the current Python grammar
  JCA> file.  It is not possible to express operator precedence, so
  JCA> deliberate shift/reduce conflicts are used instead.  That makes
  JCA> the parse tree complicated and non intuitive.  And there is no
  JCA> provision for error productions.  YACC has both of these as
  JCA> built-in features.

  JCA> I also found speed problems with tokenize.py.  AFAIK, it only
  JCA> exists because tokenizer.c does not provide comments as tokens,
  JCA> but eats them instead.  We could modify tokenizer.c, then make
  JCA> tokenize.py be the interface to the fast C tokenizer.  This
  JCA> eliminates the problem of updating both too.

  JCA> So how about re-writing the Python grammar in YACC in order to
  JCA> use its more advanced features??  The simple YACC grammar I
  JCA> wrote for 1.5.2 plus an altered tokenizer.c parsed the whole
  JCA> Lib/*.py in a couple seconds vs. 30 seconds for the first file
  JCA> using Aaron Watters' Python lint grammar written in Python.

Why not use the actual Python parser instead of tokenize.py?  I assume
it is also faster than Aaron's Python lint grammar written in Python.
The compiler in Tools/compiler uses the parser module internally and
produces an AST that is straightforward to use.  (The parse tree
produced by the parser module is fairly low-level.)

There was a thread (on the compiler-sig, I believe) where Moshe and I
noodled with a simple lint-like warnings framework based on the
compiler package.  I don't have the code we ended up with, but I found
an example checker in the mail archives and have included it below.
It checks for NameErrors.

I believe one useful change that Moshe and I arrived at was to avoid
the explicit stack that the code uses (via enterNamespace and
exitNamespace) and instead pass the namespace as an optional extra
argument to the visitXXX methods.

Jeremy

"""Check for NameErrors"""

from compiler import parseFile, walk
from compiler.misc import Stack, Set

import __builtin__
from UserDict import UserDict

class Warning:
    def __init__(self, filename, funcname, lineno):
        self.filename = filename
        self.funcname = funcname
        self.lineno = lineno

    def __str__(self):
        return self._template % self.__dict__

class UndefinedLocal(Warning):
    super_init = Warning.__init__
    
    def __init__(self, filename, funcname, lineno, name):
        self.super_init(filename, funcname, lineno)
        self.name = name

    _template = "%(filename)s:%(lineno)s  " \
                "%(funcname)s undefined local %(name)s"

class NameError(UndefinedLocal):
    _template = "%(filename)s:%(lineno)s  " \
                "%(funcname)s undefined name %(name)s"

class NameSet(UserDict):
    """Track names and the line numbers where they are referenced"""
    def __init__(self):
        self.data = self.names = {}

    def add(self, name, lineno):
        l = self.names.get(name, [])
        l.append(lineno)
        self.names[name] = l

class CheckNames:
    def __init__(self, filename):
        self.filename = filename
        self.warnings = []
        self.scope = Stack()
        self.gUse = NameSet()
        self.gDef = NameSet()
        # _locals is the stack of local namespaces
        # locals is the top of the stack
        self._locals = Stack()
        self.lUse = None
        self.lDef = None
        self.lGlobals = None # var declared global
        # holds scope,def,use,global triples for later analysis
        self.todo = []

    def enterNamespace(self, node):
        self.scope.push(node)
        self.lUse = use = NameSet()
        self.lDef = _def = NameSet()
        self.lGlobals = gbl = NameSet()
        self._locals.push((use, _def, gbl))

    def exitNamespace(self):
        self.todo.append((self.scope.top(), self.lDef, self.lUse,
                          self.lGlobals))
        self.scope.pop()
        self._locals.pop()
        if self._locals:
            self.lUse, self.lDef, self.lGlobals = self._locals.top()
        else:
            self.lUse = self.lDef = self.lGlobals = None

    def warn(self, warning, funcname, lineno, *args):
        args = (self.filename, funcname, lineno) + args
        self.warnings.append(apply(warning, args))

    def defName(self, name, lineno, local=1):
        if self.lUse is None:
            self.gDef.add(name, lineno)
        elif local == 0:
            self.gDef.add(name, lineno)
            self.lGlobals.add(name, lineno)
        else:
            self.lDef.add(name, lineno)

    def useName(self, name, lineno, local=1):
        if self.lUse is None:
            self.gUse.add(name, lineno)
        elif local == 0:
            self.gUse.add(name, lineno)
            self.lUse.add(name, lineno)            
        else:
            self.lUse.add(name, lineno)

    def check(self):
        for s, d, u, g in self.todo:
            self._check(s, d, u, g, self.gDef)
        # XXX then check the globals

    def _check(self, scope, _def, use, gbl, globals):
        # check for NameError
        # a name is defined iff it is in def.keys()
        # a name is global iff it is in gdefs.keys()
        gdefs = UserDict()
        gdefs.update(globals)
        gdefs.update(__builtin__.__dict__)
        defs = UserDict()
        defs.update(gdefs)
        defs.update(_def)
        errors = Set()
        for name in use.keys():
            if not defs.has_key(name):
                firstuse = use[name][0]
                self.warn(NameError, scope.name, firstuse, name)
                errors.add(name)

        # check for UndefinedLocalNameError
        # order == use & def sorted by lineno
        # elements are lineno, flag, name
        # flag = 0 if use, flag = 1 if def
        order = []
        for name, lines in use.items():
            if gdefs.has_key(name) and not _def.has_key(name):
                # this is a global ref, we can skip it
                continue
            for lineno in lines:
                order.append((lineno, 0, name))
        for name, lines in _def.items():
            for lineno in lines:
                order.append((lineno, 1, name))
        order.sort()
        # ready contains names that have been defined or warned about
        ready = Set()
        for lineno, flag, name in order:
            if flag == 0: # use
                if not ready.has_elt(name) and not errors.has_elt(name):
                    self.warn(UndefinedLocal, scope.name, lineno, name)
                    ready.add(name) # don't warn again
            else:
                ready.add(name)

    # below are visitor methods

    def visitFunction(self, node, noname=0):
        for expr in node.defaults:
            self.visit(expr)
        if not noname:
            self.defName(node.name, node.lineno)
        self.enterNamespace(node)
        for name in node.argnames:
            self.defName(name, node.lineno)
        self.visit(node.code)
        self.exitNamespace()
        return 1

    def visitLambda(self, node):
        return self.visitFunction(node, noname=1)

    def visitClass(self, node):
        for expr in node.bases:
            self.visit(expr)
        self.defName(node.name, node.lineno)
        self.enterNamespace(node)
        self.visit(node.code)
        self.exitNamespace()
        return 1

    def visitName(self, node):
        self.useName(node.name, node.lineno)

    def visitGlobal(self, node):
        for name in node.names:
            self.defName(name, node.lineno, local=0)

    def visitImport(self, node):
        for name, alias in node.names:
            self.defName(alias or name, node.lineno)

    visitFrom = visitImport

    def visitAssName(self, node):
        self.defName(node.name, node.lineno)
    
def check(filename):
    global p, checker
    p = parseFile(filename)
    checker = CheckNames(filename)
    walk(p, checker)
    checker.check()
    for w in checker.warnings:
        print w

if __name__ == "__main__":
    import sys

    # XXX need to do real arg processing
    check(sys.argv[1])
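
The refactoring mentioned above -- threading the namespace through the
visit calls instead of keeping an explicit stack -- might look roughly
like this.  (A sketch against today's ast module rather than the
compiler package used above; names are hypothetical, and decorators
and default expressions are simplified into the local scope.)

```python
import ast

def collect_names(node, scope=None):
    """Record names bound in `scope`, passing each namespace down as
    an argument instead of pushing/popping an explicit stack."""
    if scope is None:
        scope = set()
    if isinstance(node, (ast.FunctionDef, ast.Lambda)):
        # a function introduces a fresh local namespace
        local = set(arg.arg for arg in node.args.args)
        for child in ast.iter_child_nodes(node):
            collect_names(child, local)
        if isinstance(node, ast.FunctionDef):
            scope.add(node.name)   # the function's name binds outside it
    elif isinstance(node, ast.Name) and isinstance(node.ctx, ast.Store):
        scope.add(node.id)
    else:
        for child in ast.iter_child_nodes(node):
            collect_names(child, scope)
    return scope
```

Each recursive call receives the namespace it should record into, so
there is no enterNamespace/exitNamespace bookkeeping to get wrong.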




From nascheme at enme.ucalgary.ca  Mon Sep 25 16:57:42 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Mon, 25 Sep 2000 08:57:42 -0600
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: <20000925104009.A1747@ludwig.cnri.reston.va.us>; from Greg Ward on Mon, Sep 25, 2000 at 10:40:09AM -0400
References: <39C8C834.5E3B90E7@lemburg.com> <20000925104009.A1747@ludwig.cnri.reston.va.us>
Message-ID: <20000925085742.A26922@keymaster.enme.ucalgary.ca>

On Mon, Sep 25, 2000 at 10:40:09AM -0400, Greg Ward wrote:
> PCCTS 1.x (the precursor to ANTLR 2.x) is the only parser generator
> I've used personally

How different are PCCTS and ANTLR?  Perhaps we could use PCCTS for
CPython and ANTLR for JPython.

  Neil



From guido at beopen.com  Mon Sep 25 18:06:40 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 25 Sep 2000 11:06:40 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules posixmodule.c,2.173,2.174
In-Reply-To: Your message of "Mon, 25 Sep 2000 15:56:17 +0200."
             <20000925155616.H20757@xs4all.nl> 
References: <200009251322.GAA21574@slayer.i.sourceforge.net>  
            <20000925155616.H20757@xs4all.nl> 
Message-ID: <200009251606.LAA19626@cj20424-a.reston1.va.home.com>

> > Modified Files:
> > 	posixmodule.c 
> > Log Message:
> > Add missing prototypes for the benefit of SunOS 4.1.4 */
> 
> These should go in pyport.h ! Unless you have some reason not to export them
> to other file, but in that case we need to take a good look at the whole
> pyport.h thing.
> 
> > + #if defined(sun) && !defined(__SVR4)
> > + /* SunOS 4.1.4 doesn't have prototypes for these: */
> > + extern int rename(const char *, const char *);
> > + extern int pclose(FILE *);
> > + extern int fclose(FILE *);
> > + #endif
> > + 

Maybe, but there's already tons of platform-specific junk in
posixmodule.c.  Given that we're so close to the code freeze, let's
not do it right now.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From jim at interet.com  Mon Sep 25 17:05:56 2000
From: jim at interet.com (James C. Ahlstrom)
Date: Mon, 25 Sep 2000 11:05:56 -0400
Subject: [Python-Dev] Python syntax checker ?
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de>
		<39CF596C.17BA4DC5@interet.com> <14799.24252.537090.326130@anthem.concentric.net>
Message-ID: <39CF69D4.E3649C69@interet.com>

"Barry A. Warsaw" wrote:
> I've been wanting to check out Antlr (www.antlr.org) because it gives
> us the /possibility/ to use the same grammar files for both CPython
> and JPython.  One problem though is that it generates Java and C++ so
> we'd be accepting our first C++ into the core if we went this route.

Yes, but why not YACC?  Is Antlr so much better, or is
YACC too primitive, or what?  IMHO, adding C++ just for
parsing is not going to happen, so Antlr is not going to
happen either.

JimA



From gward at mems-exchange.org  Mon Sep 25 17:07:53 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Mon, 25 Sep 2000 11:07:53 -0400
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: <20000925085742.A26922@keymaster.enme.ucalgary.ca>; from nascheme@enme.ucalgary.ca on Mon, Sep 25, 2000 at 08:57:42AM -0600
References: <39C8C834.5E3B90E7@lemburg.com> <20000925104009.A1747@ludwig.cnri.reston.va.us> <20000925085742.A26922@keymaster.enme.ucalgary.ca>
Message-ID: <20000925110752.A1891@ludwig.cnri.reston.va.us>

On 25 September 2000, Neil Schemenauer said:
> How different are PCCTS and ANTLR?  Perhaps we could use PCCTS for
> CPython and ANTLR for JPython.

I can't speak from experience; I've only looked briefly at ANTLR.  But
it looks like they are as different as two LL(k) parser generators
written by the same guy can be.  I.e., same general philosophy, but
not much similarity apart from that.

Also, to be blunt, the C back-end PCCTS 1.x has a lot of serious
problems.  It's heavily dependent on global variables, so goodbye to a
thread-safe lexer/parser.  It uses boatloads of tricky macros, which
makes debugging the lexer a bear.  It's well-nigh impossible to remember
which macros are defined in which .c files, which functions are defined
in which .h files, and so forth.  (No really! it's like that!)

I think it would be much healthier to take the sound OO thinking that
went into the original C++ back-end for PCCTS 1.x, and that evolved
further with the Java and C++ back-ends for ANTLR 2.x, and do the same
sort of stuff in C.  Writing good solid code in C isn't impossible, it's
just tricky.  And the code generated by PCCTS 1.x is *not* good solid C
code (IMHO).

        Greg



From cgw at fnal.gov  Mon Sep 25 17:12:35 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Mon, 25 Sep 2000 10:12:35 -0500 (CDT)
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: <20000925085742.A26922@keymaster.enme.ucalgary.ca>
References: <39C8C834.5E3B90E7@lemburg.com>
	<20000925104009.A1747@ludwig.cnri.reston.va.us>
	<20000925085742.A26922@keymaster.enme.ucalgary.ca>
Message-ID: <14799.27491.414160.577996@buffalo.fnal.gov>

I think the more that can be done in Python, rather than with external
code like Antlr, the better.  Who cares if it is slow?  I
could imagine a 2-pass approach where the internal Python parser is
used to construct a parse tree which is then checked for certain
errors.  I wrote something like this to check for mismatched numbers
of '%' values and arguments in string-formatting operations (see
http://home.fnal.gov/~cgw/python/check_pct.html if you are
interested).

Only sections of code which cannot be parsed by Python's internal
parser would then need to be checked by the "stage 2" checker, which
could afford to give up speed for accuracy.  This is the part I think
should be done in Python... for all the reasons we like Python:
flexibility, maintainability, etc.
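
The %-mismatch check described above fits in a few dozen lines over a
parse tree.  A sketch (using today's ast module rather than the
1.5-era parser module; function names are hypothetical, and only
literal-format-string % tuple expressions are examined):

```python
import ast

def percent_arity(fmt):
    """Count the arguments a %-format string consumes.
    '%%' consumes none; a '*' width or precision consumes one extra."""
    count = i = 0
    while i < len(fmt):
        if fmt[i] == "%":
            i += 1
            if i < len(fmt) and fmt[i] == "%":
                i += 1
                continue                 # '%%' is a literal percent
            while i < len(fmt) and fmt[i] not in "diouxXeEfFgGcrsa":
                if fmt[i] == "*":
                    count += 1           # dynamic width/precision
                i += 1
            count += 1                   # the conversion itself
            i += 1
        else:
            i += 1
    return count

def check_pct(source):
    """Warn about "fmt" % (args...) with the wrong number of values."""
    warnings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.BinOp) and isinstance(node.op, ast.Mod)
                and isinstance(node.left, ast.Constant)
                and isinstance(node.left.value, str)
                and isinstance(node.right, ast.Tuple)):
            want = percent_arity(node.left.value)
            got = len(node.right.elts)
            if want != got:
                warnings.append((node.lineno, want, got))
    return warnings
```

For example, `check_pct("x = '%s %s' % (1,)")` flags line 1 as wanting
two values but getting one.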





From bwarsaw at beopen.com  Mon Sep 25 17:23:40 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 25 Sep 2000 11:23:40 -0400 (EDT)
Subject: [Python-Dev] Python syntax checker ?
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de>
	<39CF596C.17BA4DC5@interet.com>
	<14799.24252.537090.326130@anthem.concentric.net>
	<20000925104310.B1747@ludwig.cnri.reston.va.us>
Message-ID: <14799.28156.687176.869540@anthem.concentric.net>

>>>>> "GW" == Greg Ward <gward at mems-exchange.org> writes:

    GW> Or contribute a C back-end to ANTLR -- I've been toying with
    GW> this idea for, ummm, too damn long now.  Years.

Yes (to both :).

>>>>> "NS" == Neil Schemenauer <nascheme at enme.ucalgary.ca> writes:

    NS> How different are PCCTS and ANTLR?  Perhaps we could use PCCTS
    NS> for CPython and ANTLR for JPython.

Unknown.  It would only make sense if the same grammar files could be
fed to each.  I have no idea whether that's true or not.  If not,
Greg's idea is worth researching.

-Barry



From loewis at informatik.hu-berlin.de  Mon Sep 25 17:36:24 2000
From: loewis at informatik.hu-berlin.de (Martin von Loewis)
Date: Mon, 25 Sep 2000 17:36:24 +0200 (MET DST)
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: <39CF69D4.E3649C69@interet.com> (jim@interet.com)
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de>
		<39CF596C.17BA4DC5@interet.com> <14799.24252.537090.326130@anthem.concentric.net> <39CF69D4.E3649C69@interet.com>
Message-ID: <200009251536.RAA26375@pandora.informatik.hu-berlin.de>

> Yes, but why not YACC?  Is Antlr so much better, or is
> YACC too primitive, or what?  IMHO, adding C++ just for
> parsing is not going to happen, so Antlr is not going to
> happen either.

I think the advantage that Barry saw is that ANTLR generates Java in
addition to C++, so it could be used in JPython as well. In addition,
ANTLR is more advanced than YACC; it specifically supports full EBNF
as input, and has better mechanisms for conflict resolution.

On the YACC for Java side, Axel Schreiner has developed jay, see
http://www2.informatik.uni-osnabrueck.de/bernd/jay/staff/design/de/Artikel.htmld/
(if you read German, otherwise don't bother :-)

The main problem with multilanguage output is the semantic actions -
it would be quite a stunt to put semantic actions into the parser
which are valid both in C and Java :-) On that front, there is also
CUP (http://www.cs.princeton.edu/~appel/modern/java/CUP/), which has
different markup for Java actions ({: ... :}).

There is also BYACC/J, a patch to Berkeley Yacc to produce Java
(http://www.lincom-asg.com/~rjamison/byacc/).

Personally, I'm quite in favour of having the full parser source
(including parser generator if necessary) in the Python source
distribution. As a GCC contributor, I know what pain it is for users
that GCC requires bison to build - even though it is only required for
CVS builds, as distributions come with the generated files.

Regards,
Martin




From gward at mems-exchange.org  Mon Sep 25 18:22:35 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Mon, 25 Sep 2000 12:22:35 -0400
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: <14799.28156.687176.869540@anthem.concentric.net>; from bwarsaw@beopen.com on Mon, Sep 25, 2000 at 11:23:40AM -0400
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de> <39CF596C.17BA4DC5@interet.com> <14799.24252.537090.326130@anthem.concentric.net> <20000925104310.B1747@ludwig.cnri.reston.va.us> <14799.28156.687176.869540@anthem.concentric.net>
Message-ID: <20000925122235.A2167@ludwig.cnri.reston.va.us>

On 25 September 2000, Barry A. Warsaw said:
>     NS> How different are PCCTS and ANTLR?  Perhaps we could use PCCTS
>     NS> for CPython and ANTLR for JPython.
> 
> Unknown.  It would only make sense if the same grammar files could be
> fed to each.  I have no idea whether that's true or not.  If not,
> Greg's idea is worth researching.

PCCTS 1.x grammar files tend to have lots of C code interwoven in them
-- at least for tricky, ill-defined grammars like BibTeX.  ;-)

ANTLR 2.x grammars certainly allow Java code to be woven into them; I
assume you can instead weave C++ or Sather if that's your preference.
Obviously, this would be one problem with having a common grammar for
JPython and CPython.

        Greg



From mal at lemburg.com  Mon Sep 25 18:39:22 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 25 Sep 2000 18:39:22 +0200
Subject: [Python-Dev] Python syntax checker ?
References: <39C8C834.5E3B90E7@lemburg.com> <20000925104009.A1747@ludwig.cnri.reston.va.us>
Message-ID: <39CF7FBA.A54C40D@lemburg.com>

Greg Ward wrote:
> 
> On 20 September 2000, M.-A. Lemburg said:
> > Would it be possible to write a Python syntax checker that doesn't
> > stop processing at the first error it finds but instead tries
> > to continue as far as possible (much like make -k) ?
> >
> > If yes, could the existing Python parser/compiler be reused for
> > such a tool ?
> 
> From what I understand of Python's parser and parser generator, no.
> Recovering from errors is indeed highly non-trivial.  If you're really
> interested, I'd look into Terence Parr's ANTLR -- it's a very fancy
> parser generator that's waaay ahead of pgen (or lex/yacc, for that
> matter).  ANTLR 2.x is highly Java-centric, and AFAIK doesn't yet have a
> C backend (grumble) -- just C++ and Java.  (Oh wait, the antlr.org web
> site says it can generate Sather too -- now there's an important
> mainstream language!  ;-)

Thanks, I'll have a look.
 
> Tech notes: like pgen, ANTLR is LL; it generates a recursive-descent
> parser.  Unlike pgen, ANTLR is LL(k) -- it can support arbitrary
> lookahead, although k>2 can make parser generation expensive (not
> parsing itself, just turning your grammar into code), as well as make
> your language harder to understand.  (I have a theory that pgen's k=1
> limitation has been a brick wall in the way of making Python's syntax
> more complex, i.e. it's a *feature*!)
> 
> More importantly, ANTLR has good support for error recovery.  My BibTeX
> parser has a lot of fun recovering from syntax errors, and (with a
> little smoke 'n mirrors magic in the lexing stage) does a pretty good
> job of it.  But you're right, it's *not* trivial to get this stuff
> right.  And without support from the parser generator, I suspect you
> would be in a world of hurtin'.
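The one-token-lookahead, recursive-descent scheme Greg describes can be illustrated with a toy grammar (a hypothetical sketch for illustration only, unrelated to pgen's or ANTLR's actual generated code):

```python
def parse_expr(tokens):
    """Evaluate   expr := term (('+'|'-') term)*   with term := NUMBER."""
    pos = [0]

    def peek():
        # the single token of lookahead an LL(1) parser is allowed
        return tokens[pos[0]] if pos[0] < len(tokens) else None

    def advance():
        tok = peek()
        pos[0] += 1
        return tok

    def term():
        tok = advance()
        if not (isinstance(tok, str) and tok.isdigit()):
            raise SyntaxError("expected a number, got %r" % (tok,))
        return int(tok)

    value = term()
    while peek() in ("+", "-"):   # one lookahead token decides each step
        if advance() == "+":
            value += term()
        else:
            value -= term()
    return value
```

Every decision is made by inspecting exactly one upcoming token, which is what makes the grammar LL(1); allowing `peek()` to see k tokens ahead is the LL(k) generalization.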

I was actually thinking of extracting the Python tokenizer and
parser from the Python source and tweaking it until it did
what I wanted it to do, ie. not generate valid code but produce
valid error messages ;-)

Now from the feedback I got it seems that this is not the
right approach. I'm not even sure whether using a parser
at all is the right way... I may have to stick to a fairly
general tokenizer and then try to solve the problem in chunks
of code (much like what Guido hinted at in his reply), possibly
even by doing trial and error using the Python builtin compiler
on these chunks.
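The trial-and-error idea can be sketched roughly like this (an illustrative toy, not an actual tool): split the source at non-indented lines and compile each top-level chunk separately, so one bad function does not hide errors in the rest of the file.

```python
def check_chunks(source):
    """Compile each top-level chunk separately; return [(lineno, msg), ...]."""
    chunks = []                      # (starting line number, list of lines)
    for lineno, line in enumerate(source.splitlines(), 1):
        if line and not line[0].isspace():
            chunks.append((lineno, [line]))      # new top-level statement
        elif chunks:
            chunks[-1][1].append(line)           # indented or blank: continuation
    errors = []
    for start, lines in chunks:
        try:
            compile("\n".join(lines), "<chunk>", "exec")
        except SyntaxError as err:
            # map the chunk-relative line number back to the file
            errors.append((start + (err.lineno or 1) - 1, err.msg))
    return errors
```

A chunk whose syntax is broken only produces one error for that chunk, but compilation continues with the remaining chunks, which is the make -k behaviour asked for at the start of this thread.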

Oh well,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From fdrake at beopen.com  Mon Sep 25 19:04:18 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Mon, 25 Sep 2000 13:04:18 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules getpath.c,1.30,1.31
In-Reply-To: <200009251700.KAA27700@slayer.i.sourceforge.net>
References: <200009251700.KAA27700@slayer.i.sourceforge.net>
Message-ID: <14799.34194.855026.395907@cj42289-a.reston1.va.home.com>

Jeremy Hylton writes:
 > fix bug #114290: when interpreter's argv[0] has a relative path make
 >     it absolute by joining it with getcwd result.  avoid including
 >     unnecessary ./ in path but do not test for ../ (more complicated)
...
 > +     else if (argv0_path[0] == '.') {
 > + 	getcwd(path, MAXPATHLEN);
 > + 	if (argv0_path[1] == '/') 
 > + 	    joinpath(path, argv0_path + 2);

  Did you test this when argv[0] is something like './/foo/bin/python'?


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From effbot at telia.com  Mon Sep 25 19:18:21 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Mon, 25 Sep 2000 19:18:21 +0200
Subject: [Python-Dev] last second patches (was: regarding the Python Developer posting...)
References: <LNBBLJKPBEHFEDALKOLCMEKAHHAA.tim_one@email.msn.com> <39CE71FD.8858B71D@lemburg.com>
Message-ID: <016e01c02714$f945bc20$766940d5@hagrid>

in response to an OS X compiler problem, mal wrote:
> You could try to enable the macro at the top of unicodectype.c:
>  
> #if defined(macintosh) || defined(MS_WIN64)
> /*XXX This was required to avoid a compiler error for an early Win64
>  * cross-compiler that was used for the port to Win64. When the platform is
>  * released the MS_WIN64 inclusion here should no longer be necessary.
>  */
> /* This probably needs to be defined for some other compilers too. It breaks the
> ** 5000-label switch statement up into switches with around 1000 cases each.
> */
> #define BREAK_SWITCH_UP return 1; } switch (ch) {
> #else
> #define BREAK_SWITCH_UP /* nothing */
> #endif
> 
> If it does compile with the work-around enabled, please
> give us a set of defines which identify the compiler and
> platform so we can enable it per default for your setup.

I have a 500k "negative patch" sitting on my machine which removes
most of unicodectype.c, replacing it with a small data table (based on
the same unidb work as yesterday's unicodedatabase patch).

out
</F>

# dump all known unicode data

import unicodedata

for i in range(65536):
    char = unichr(i)
    data = (
        # ctype predicates
        char.isalnum(),
        char.isalpha(),
        char.isdecimal(),
        char.isdigit(),
        char.islower(),
        char.isnumeric(),
        char.isspace(),
        char.istitle(),
        char.isupper(),
        # ctype mappings
        char.lower(),
        char.upper(),
        char.title(),
        # properties
        unicodedata.digit(char, None),
        unicodedata.numeric(char, None),
        unicodedata.decimal(char, None),
        unicodedata.category(char),
        unicodedata.bidirectional(char),
        unicodedata.decomposition(char),
        unicodedata.mirrored(char),
        unicodedata.combining(char)
        )
    # print the record so that runs before/after a patch can be diffed
    print i, data





From effbot at telia.com  Mon Sep 25 19:27:19 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Mon, 25 Sep 2000 19:27:19 +0200
Subject: [Python-Dev] last second patches (was: regarding the Python Developer posting...)
References: <LNBBLJKPBEHFEDALKOLCMEKAHHAA.tim_one@email.msn.com> <39CE71FD.8858B71D@lemburg.com> <016e01c02714$f945bc20$766940d5@hagrid>
Message-ID: <017801c02715$ebcc38c0$766940d5@hagrid>

oops.  mailer problem; here's the rest of the mail:

> I have a 500k "negative patch" sitting on my machine which removes
> most of unicodectype.c, replacing it with a small data table (based on
> the same unidb work as yesterday's unicodedatabase patch).

(this shaves another 400-500k off the source distribution,
and 10-20k in the binaries...)

I've verified that all ctype-related methods return the same result
as before the patch, for all characters in the unicode set (see the
attached script).

should I check it in?

</F>




From mal at lemburg.com  Mon Sep 25 19:46:21 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 25 Sep 2000 19:46:21 +0200
Subject: [Python-Dev] last second patches (was: regarding the Python 
 Developer posting...)
References: <LNBBLJKPBEHFEDALKOLCMEKAHHAA.tim_one@email.msn.com> <39CE71FD.8858B71D@lemburg.com> <016e01c02714$f945bc20$766940d5@hagrid> <017801c02715$ebcc38c0$766940d5@hagrid>
Message-ID: <39CF8F6D.3F32C8FD@lemburg.com>

Fredrik Lundh wrote:
> 
> oops.  mailer problem; here's the rest of the mail:
> 
> > I have a 500k "negative patch" sitting on my machine which removes
> > most of unicodectype.c, replacing it with a small data table (based on
> > the same unidb work as yesterday's unicodedatabase patch).
> 
> (this shaves another 400-500k off the source distribution,
> and 10-20k in the binaries...)
> 
> I've verified that all ctype-related methods return the same result
> as before the patch, for all characters in the unicode set (see the
> attached script).
> 
> should I check it in?

Any chance of taking a look at it first ? (BTW, what happened to the
usual post to SF, review, then checkin cycle ?)

The C type checks are a little performance sensitive since they
are used on a char by char basis in the C implementation of
.upper(), etc. -- do the new methods give the same performance ?

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From tim_one at email.msn.com  Mon Sep 25 19:55:49 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Mon, 25 Sep 2000 13:55:49 -0400
Subject: [Python-Dev] last second patches (was: regarding the Python  Developer posting...)
In-Reply-To: <39CF8F6D.3F32C8FD@lemburg.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEOKHHAA.tim_one@email.msn.com>

[M.-A. Lemburg, on /F's Unicode patches]
> Any chance of taking a look at it first ? (BTW, what happened to the
> usual post to SF, review, then checkin cycle ?)

I encouraged /F *not* to submit a patch for the unicodedatabase.c change.
He knows what he's doing, experts in an area are allowed (see PEP200) to
skip the patch business, and we're trying to make quick progress before
2.0b2 ships.

This change may be more controversial, though:

> The C type checks are a little performance sensitive since they
> are used on a char by char basis in the C implementation of
> .upper(), etc. -- do the new methods give the same performance ?

Don't know.  Although it's hard to imagine we have any Unicode apps out
there now that will notice one way or the other <wink>.





From effbot at telia.com  Mon Sep 25 20:08:22 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Mon, 25 Sep 2000 20:08:22 +0200
Subject: [Python-Dev] last second patches (was: regarding the Python  Developer posting...)
References: <LNBBLJKPBEHFEDALKOLCMEKAHHAA.tim_one@email.msn.com> <39CE71FD.8858B71D@lemburg.com> <016e01c02714$f945bc20$766940d5@hagrid> <017801c02715$ebcc38c0$766940d5@hagrid> <39CF8F6D.3F32C8FD@lemburg.com>
Message-ID: <003601c0271c$1b814c80$766940d5@hagrid>

mal wrote:
> Any chance of taking a look at it first ?

same as unicodedatabase.c, just other data.

> (BTW, what happened to the usual post to SF, review, then
> checkin cycle ?)

two problems: SF cannot handle patches larger than 500k.
and we're in ship mode...

> The C type checks are a little performance sensitive since they
> are used on a char by char basis in the C implementation of
> .upper(), etc. -- do the new methods give the same performance ?

well, they're about 40% faster on my box.  ymmv, of course.
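For anyone wanting to reproduce this kind of comparison, a crude micro-benchmark along these lines (illustrative only, not the measurement quoted above) exercises the char-by-char case mapping that the C type tables back:

```python
import timeit

# Build a string covering a broad slice of the character range, then
# time .upper(), which consults the case-mapping tables one char at a time.
text = "".join(chr(i) for i in range(0x20, 0x2000)) * 20
elapsed = timeit.timeit(lambda: text.upper(), number=200)
print("%.3f seconds for 200 calls" % elapsed)
```

Running the same loop against an interpreter built with and without the patch gives a rough relative figure; absolute numbers vary with compiler and box, as noted.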

</F>




From gward at mems-exchange.org  Mon Sep 25 20:05:12 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Mon, 25 Sep 2000 14:05:12 -0400
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: <200009251536.RAA26375@pandora.informatik.hu-berlin.de>; from loewis@informatik.hu-berlin.de on Mon, Sep 25, 2000 at 05:36:24PM +0200
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de> <39CF596C.17BA4DC5@interet.com> <14799.24252.537090.326130@anthem.concentric.net> <39CF69D4.E3649C69@interet.com> <200009251536.RAA26375@pandora.informatik.hu-berlin.de>
Message-ID: <20000925140511.A2319@ludwig.cnri.reston.va.us>

On 25 September 2000, Martin von Loewis said:
> Personally, I'm quite in favour of having the full parser source
> (including parser generator if necessary) in the Python source
> distribution. As a GCC contributor, I know what pain it is for users
> that GCC requires bison to build - even though it is only required for
> CVS builds, as distributions come with the generated files.

This would be a strike against ANTLR, since it's written in Java -- and
therefore is about as portable as a church.  ;-(

It should be possible to generate good, solid, portable C code... but
AFAIK no one has done so to date with ANTLR 2.x.

        Greg



From jeremy at beopen.com  Mon Sep 25 20:11:12 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Mon, 25 Sep 2000 14:11:12 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules getpath.c,1.30,1.31
In-Reply-To: <14799.34194.855026.395907@cj42289-a.reston1.va.home.com>
References: <200009251700.KAA27700@slayer.i.sourceforge.net>
	<14799.34194.855026.395907@cj42289-a.reston1.va.home.com>
Message-ID: <14799.38208.987507.250305@bitdiddle.concentric.net>

>>>>> "FLD" == Fred L Drake, <fdrake at beopen.com> writes:

  FLD> Did you test this when argv[0] is something like
  FLD> './/foo/bin/python'? 

No.  Two questions: What would that mean? How could I generate it?

Jeremy





From fdrake at beopen.com  Mon Sep 25 20:07:00 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Mon, 25 Sep 2000 14:07:00 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules getpath.c,1.30,1.31
In-Reply-To: <14799.38208.987507.250305@bitdiddle.concentric.net>
References: <200009251700.KAA27700@slayer.i.sourceforge.net>
	<14799.34194.855026.395907@cj42289-a.reston1.va.home.com>
	<14799.38208.987507.250305@bitdiddle.concentric.net>
Message-ID: <14799.37956.408416.190160@cj42289-a.reston1.va.home.com>

Jeremy Hylton writes:
 >   FLD> Did you test this when argv[0] is something like
 >   FLD> './/foo/bin/python'? 
 > 
 > No.  Two questions: What would that mean? How could I generate it?

  That should mean the same as './foo/bin/python' since multiple '/'
are equivalent to a single '/' on Unix.  (Same for r'\' on Windows
since this won't interfere with UNC paths (like '\\host\foo\bin...')).
  You can do this using fork/exec.
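Fred's claim about repeated slashes is easy to check in isolation (a quick illustration; the actual getpath.c behaviour was tested with fork/exec as he suggests):

```python
import posixpath

# On Unix, consecutive slashes collapse to one, so these name the same file.
assert posixpath.normpath(".//foo/bin/python") == "foo/bin/python"
assert posixpath.normpath("./foo//bin///python") == "foo/bin/python"

# To actually hand the interpreter such an argv[0], one would exec it
# with a doctored argument vector, e.g. (hypothetical paths):
#   os.execv("/usr/local/bin/python", [".//foo/bin/python", "script.py"])
```

The getpath.c code, by contrast, only strips a leading "./", which is why the doubled-slash case is worth testing separately.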


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From jeremy at beopen.com  Mon Sep 25 20:20:20 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Mon, 25 Sep 2000 14:20:20 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules getpath.c,1.30,1.31
In-Reply-To: <14799.37956.408416.190160@cj42289-a.reston1.va.home.com>
References: <200009251700.KAA27700@slayer.i.sourceforge.net>
	<14799.34194.855026.395907@cj42289-a.reston1.va.home.com>
	<14799.38208.987507.250305@bitdiddle.concentric.net>
	<14799.37956.408416.190160@cj42289-a.reston1.va.home.com>
Message-ID: <14799.38756.174565.664691@bitdiddle.concentric.net>

>>>>> "FLD" == Fred L Drake, <fdrake at beopen.com> writes:

  FLD> Jeremy Hylton writes: Did you test this when argv[0] is
  FLD> something like './/foo/bin/python'?
  >>
  >> No.  Two questions: What would that mean? How could I generate
  >> it?

  FLD>   That should mean the same as './foo/bin/python' since
  FLD>   multiple '/' are equivalent to a single '/' on Unix.

Ok.  Tested with os.execv and it works correctly.

Did you see my query (in private email) about 1) whether it works on
Windows and 2) whether I should worry about platforms that don't have
a valid getcwd?

Jeremy





From effbot at telia.com  Mon Sep 25 20:26:16 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Mon, 25 Sep 2000 20:26:16 +0200
Subject: [Python-Dev] CVS problems
References: <200009251700.KAA27700@slayer.i.sourceforge.net><14799.34194.855026.395907@cj42289-a.reston1.va.home.com><14799.38208.987507.250305@bitdiddle.concentric.net> <14799.37956.408416.190160@cj42289-a.reston1.va.home.com>
Message-ID: <006c01c0271e$1a72b0c0$766940d5@hagrid>

> cvs add Objects\unicodetype_db.h
cvs server: scheduling file `Objects/unicodetype_db.h' for addition
cvs server: use 'cvs commit' to add this file permanently

> cvs commit Objects\unicodetype_db.h
cvs server: [11:05:10] waiting for anoncvs_python's lock in /cvsroot/python/python/dist/src/Objects

yet another stale lock?  if so, what happened?  and more
importantly, how do I get rid of it?

</F>




From thomas at xs4all.net  Mon Sep 25 20:23:22 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 25 Sep 2000 20:23:22 +0200
Subject: [Python-Dev] CVS problems
In-Reply-To: <006c01c0271e$1a72b0c0$766940d5@hagrid>; from effbot@telia.com on Mon, Sep 25, 2000 at 08:26:16PM +0200
References: <200009251700.KAA27700@slayer.i.sourceforge.net><14799.34194.855026.395907@cj42289-a.reston1.va.home.com><14799.38208.987507.250305@bitdiddle.concentric.net> <14799.37956.408416.190160@cj42289-a.reston1.va.home.com> <006c01c0271e$1a72b0c0$766940d5@hagrid>
Message-ID: <20000925202322.I20757@xs4all.nl>

On Mon, Sep 25, 2000 at 08:26:16PM +0200, Fredrik Lundh wrote:
> > cvs add Objects\unicodetype_db.h
> cvs server: scheduling file `Objects/unicodetype_db.h' for addition
> cvs server: use 'cvs commit' to add this file permanently
> 
> > cvs commit Objects\unicodetype_db.h
> cvs server: [11:05:10] waiting for anoncvs_python's lock in /cvsroot/python/python/dist/src/Objects
> 
> yet another stale lock?  if so, what happened?  and more
> importantly, how do I get rid of it?

This might not be a stale lock. Because it's anoncvs's lock, it can't be a
write lock. I've seen this before (mostly on checking out) and it does take
quite a while for the CVS process to continue :P But in my cases, eventually
it did. If it stays longer than, say, 30m, it's probably
SF-bug-reporting-time again :P

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From tim_one at email.msn.com  Mon Sep 25 20:24:25 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Mon, 25 Sep 2000 14:24:25 -0400
Subject: [Python-Dev] CVS problems
In-Reply-To: <006c01c0271e$1a72b0c0$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEOMHHAA.tim_one@email.msn.com>

[Fredrik Lundh]
> > cvs add Objects\unicodetype_db.h
> cvs server: scheduling file `Objects/unicodetype_db.h' for addition
> cvs server: use 'cvs commit' to add this file permanently
>
> > cvs commit Objects\unicodetype_db.h
> cvs server: [11:05:10] waiting for anoncvs_python's lock in
> /cvsroot/python/python/dist/src/Objects
>
> yet another stale lock?  if so, what happened?  and more
> importantly, how do I get rid of it?

I expect this one goes away by itself -- anoncvs can't be doing a commit,
and I don't believe we've ever seen a stale lock from anoncvs.  Probably
just some fan doing their first read-only checkout over a slow line.  BTW, I
just did a full update & didn't get any lock msgs.  Try again!





From effbot at telia.com  Mon Sep 25 21:04:26 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Mon, 25 Sep 2000 21:04:26 +0200
Subject: [Python-Dev] CVS problems
References: <LNBBLJKPBEHFEDALKOLCMEOMHHAA.tim_one@email.msn.com>
Message-ID: <00bc01c02723$6f8faf40$766940d5@hagrid>

tim wrote:
> > > cvs commit Objects\unicodetype_db.h
> > cvs server: [11:05:10] waiting for anoncvs_python's lock in
> > /cvsroot/python/python/dist/src/Objects
> >
> I expect this one goes away by itself -- anoncvs can't be doing a commit,
> and I don't believe we've ever seen a stale lock from anoncvs.  Probably
> just some fan doing their first read-only checkout over a slow line.

I can update alright, but I still get this message when I try
to commit stuff.  this message, or timeouts from the server.

annoying...

</F>




From guido at beopen.com  Mon Sep 25 22:21:11 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 25 Sep 2000 15:21:11 -0500
Subject: [Python-Dev] last second patches (was: regarding the Python Developer posting...)
In-Reply-To: Your message of "Mon, 25 Sep 2000 20:08:22 +0200."
             <003601c0271c$1b814c80$766940d5@hagrid> 
References: <LNBBLJKPBEHFEDALKOLCMEKAHHAA.tim_one@email.msn.com> <39CE71FD.8858B71D@lemburg.com> <016e01c02714$f945bc20$766940d5@hagrid> <017801c02715$ebcc38c0$766940d5@hagrid> <39CF8F6D.3F32C8FD@lemburg.com>  
            <003601c0271c$1b814c80$766940d5@hagrid> 
Message-ID: <200009252021.PAA20146@cj20424-a.reston1.va.home.com>

> mal wrote:
> > Any chance of taking a look at it first ?
> 
> same as unicodedatabase.c, just other data.
> 
> > (BTW, what happened to the usual post to SF, review, then
> > checkin cycle ?)
> 
> two problems: SF cannot handle patches larger than 500k.
> and we're in ship mode...
> 
> > The C type checks are a little performance sensitive since they
> > are used on a char by char basis in the C implementation of
> > .upper(), etc. -- do the new methods give the same performance ?
> 
> well, they're about 40% faster on my box.  ymmv, of course.

Fredrik, why don't you make your patch available for review by
Marc-Andre -- after all he "owns" this code (is the original author).
If Marc-Andre agrees, and Jeremy has enough time to finish the release
on time, I have no problem with checking it in.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From jeremy at beopen.com  Mon Sep 25 22:02:25 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Mon, 25 Sep 2000 16:02:25 -0400 (EDT)
Subject: [Python-Dev] CVS problems
In-Reply-To: <00bc01c02723$6f8faf40$766940d5@hagrid>
References: <LNBBLJKPBEHFEDALKOLCMEOMHHAA.tim_one@email.msn.com>
	<00bc01c02723$6f8faf40$766940d5@hagrid>
Message-ID: <14799.44881.753935.662313@bitdiddle.concentric.net>

>>>>> "FL" == Fredrik Lundh <effbot at telia.com> writes:

  FL>> cvs commit Objects\unicodetype_db.h
  >> > cvs server: [11:05:10] waiting for anoncvs_python's lock in
  >> > /cvsroot/python/python/dist/src/Objects
  >> >
  [tim wrote:]
  >> I expect this one goes away by itself -- anoncvs can't be doing a
  >> commit, and I don't believe we've ever seen a stale lock from
  >> anoncvs.  Probably just some fan doing their first read-only
  >> checkout over a slow line.

  FL> I can update alright, but I still get this message when I try to
  FL> commit stuff.  this message, or timeouts from the server.

  FL> annoying...

It's still there now, about an hour later.  I can't even tag the tree
with the r20b2 marker, of course.

How do we submit an SF admin request?

Jeremy



From effbot at telia.com  Mon Sep 25 22:31:06 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Mon, 25 Sep 2000 22:31:06 +0200
Subject: [Python-Dev] CVS problems
References: <LNBBLJKPBEHFEDALKOLCMEOMHHAA.tim_one@email.msn.com><00bc01c02723$6f8faf40$766940d5@hagrid> <14799.44881.753935.662313@bitdiddle.concentric.net>
Message-ID: <006901c0272f$ce106120$766940d5@hagrid>

jeremy wrote:

> It's still there now, about an hour later.  I can't even tag the tree
> with the r20b2 marker, of course.
> 
> How do we submit an SF admin request?

I've already submitted a support request.  not that anyone
seems to be reading them, though -- the oldest unassigned
request is from September 19th...

anyone know anyone at sourceforge?

</F>




From effbot at telia.com  Mon Sep 25 22:49:47 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Mon, 25 Sep 2000 22:49:47 +0200
Subject: [Python-Dev] last second patches (was: regarding the Python Developer posting...)
References: <LNBBLJKPBEHFEDALKOLCMEKAHHAA.tim_one@email.msn.com> <39CE71FD.8858B71D@lemburg.com> <016e01c02714$f945bc20$766940d5@hagrid> <017801c02715$ebcc38c0$766940d5@hagrid> <39CF8F6D.3F32C8FD@lemburg.com>              <003601c0271c$1b814c80$766940d5@hagrid>  <200009252021.PAA20146@cj20424-a.reston1.va.home.com>
Message-ID: <008101c02732$29fbf4c0$766940d5@hagrid>

> Fredrik, why don't you make your patch available for review by
> Marc-Andre -- after all he "owns" this code (is the original author).

hey, *I* wrote the original string type, didn't I? ;-)

anyway, the new unicodectype.c file is here:
http://sourceforge.net/patch/download.php?id=101652

(the patch is 500k, the new file 14k)

the new data file is here:
http://sourceforge.net/patch/download.php?id=101653

the new generator script is already in the repository
(Tools/unicode/makeunicodedata.py)

</F>




From fdrake at beopen.com  Mon Sep 25 22:39:35 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Mon, 25 Sep 2000 16:39:35 -0400 (EDT)
Subject: [Python-Dev] CVS problems
In-Reply-To: <006901c0272f$ce106120$766940d5@hagrid>
References: <LNBBLJKPBEHFEDALKOLCMEOMHHAA.tim_one@email.msn.com>
	<00bc01c02723$6f8faf40$766940d5@hagrid>
	<14799.44881.753935.662313@bitdiddle.concentric.net>
	<006901c0272f$ce106120$766940d5@hagrid>
Message-ID: <14799.47111.674769.204798@cj42289-a.reston1.va.home.com>

Fredrik Lundh writes:
 > anyone know anyone at sourceforge?

  I'll send an email.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From jim at interet.com  Mon Sep 25 22:48:28 2000
From: jim at interet.com (James C. Ahlstrom)
Date: Mon, 25 Sep 2000 16:48:28 -0400
Subject: [Python-Dev] Python syntax checker ?
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de>
			<39CF596C.17BA4DC5@interet.com> <14799.24252.537090.326130@anthem.concentric.net> <39CF69D4.E3649C69@interet.com> <200009251536.RAA26375@pandora.informatik.hu-berlin.de>
Message-ID: <39CFBA1C.3E05B760@interet.com>

Martin von Loewis wrote:
> 
>> Yes, but why not YACC?  Is Antlr so much better, or is

> I think the advantage that Barry saw is that ANTLR generates Java in
> addition to C, so it could be used in JPython as well. In addition,
> ANTLR is more advanced than YACC; it specifically supports full EBNF
> as input, and has better mechanisms for conflict resolution.

Oh, OK.  Thanks.
 
> Personally, I'm quite in favour of having the full parser source
> (including parser generator if necessary) in the Python source
> distribution. As a GCC contributor, I know what pain it is for users
> that GCC requires bison to build - even though it is only required for
> CVS builds, as distributions come with the generated files.

I see your point, but the practical solution that we can
do today is to use YACC, bison, and distribute the generated
parser files.

Jim



From jeremy at beopen.com  Mon Sep 25 23:14:02 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Mon, 25 Sep 2000 17:14:02 -0400 (EDT)
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: <39CFBA1C.3E05B760@interet.com>
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de>
	<39CF596C.17BA4DC5@interet.com>
	<14799.24252.537090.326130@anthem.concentric.net>
	<39CF69D4.E3649C69@interet.com>
	<200009251536.RAA26375@pandora.informatik.hu-berlin.de>
	<39CFBA1C.3E05B760@interet.com>
Message-ID: <14799.49178.2354.77727@bitdiddle.concentric.net>

>>>>> "JCA" == James C Ahlstrom <jim at interet.com> writes:

  >> Personally, I'm quite in favour of having the full parser source
  >> (including parser generator if necessary) in the Python source
  >> distribution. As a GCC contributor, I know what pain it is for
  >> users that GCC requires bison to build - even though it is only
  >> required for CVS builds, as distributions come with the generated
  >> files.

  JCA> I see your point, but the practical solution that we can do
  JCA> today is to use YACC, bison, and distribute the generated
  JCA> parser files.

I don't understand what problem this is a practical solution to.
This thread started with MAL's questions about finding errors in
Python code.  You mentioned an effort to write a lint-like tool.
It may be that YACC has great support for error recovery, in which
case MAL might want to look at it for his tool.

But in general, the most practical solution for parsing Python is
probably to use the Python parser and the builtin parser module.  It
already exists and seems to work just fine.
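Jeremy's suggestion amounts to something like this (a minimal sketch using the builtin compiler; it stops at the first error, which is exactly the limitation MAL started from):

```python
def syntax_error(source):
    """Return (lineno, msg) for the first syntax error, or None if clean."""
    try:
        compile(source, "<string>", "exec")
    except SyntaxError as err:
        return (err.lineno, err.msg)
    return None
```

A lint-style tool could start from this and add its own recovery on top, e.g. by re-checking the source with the offending region removed.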

Jeremy



From thomas at xs4all.net  Mon Sep 25 23:27:01 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 25 Sep 2000 23:27:01 +0200
Subject: [Python-Dev] CVS problems
In-Reply-To: <006901c0272f$ce106120$766940d5@hagrid>; from effbot@telia.com on Mon, Sep 25, 2000 at 10:31:06PM +0200
References: <LNBBLJKPBEHFEDALKOLCMEOMHHAA.tim_one@email.msn.com><00bc01c02723$6f8faf40$766940d5@hagrid> <14799.44881.753935.662313@bitdiddle.concentric.net> <006901c0272f$ce106120$766940d5@hagrid>
Message-ID: <20000925232701.J20757@xs4all.nl>

On Mon, Sep 25, 2000 at 10:31:06PM +0200, Fredrik Lundh wrote:
> jeremy wrote:

> > It's still there now, about an hour later.  I can't even tag the tree
> > with the r20b2 marker, of course.
> > 
> > How do we submit an SF admin request?
> 
> I've already submitted a support request.  not that anyone
> seems to be reading them, though -- the oldest unassigned
> request is from September 19th...

> anyone know anyone at sourceforge?

I've had good results mailing 'staff at sourceforge.net' -- but only in real
emergencies (one of the servers was down, at the time.) That isn't to say
you or someone else shouldn't use it now (it's delaying the beta, after all,
which is kind of an emergency) but I just can't say how fast they'll respond
to such a request :)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From tim_one at email.msn.com  Mon Sep 25 23:33:27 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Mon, 25 Sep 2000 17:33:27 -0400
Subject: [Python-Dev] CVS problems
In-Reply-To: <20000925232701.J20757@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEPPHHAA.tim_one@email.msn.com>

The CVS problem has been fixed.





From mal at lemburg.com  Tue Sep 26 00:35:34 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 26 Sep 2000 00:35:34 +0200
Subject: [Python-Dev] last second patches (was: regarding the Python  
 Developer posting...)
References: <LNBBLJKPBEHFEDALKOLCMEKAHHAA.tim_one@email.msn.com> <39CE71FD.8858B71D@lemburg.com> <016e01c02714$f945bc20$766940d5@hagrid> <017801c02715$ebcc38c0$766940d5@hagrid> <39CF8F6D.3F32C8FD@lemburg.com> <003601c0271c$1b814c80$766940d5@hagrid>
Message-ID: <39CFD336.C5B6DB4D@lemburg.com>

Fredrik Lundh wrote:
> 
> mal wrote:
> 
> > The C type checks are a little performance sensitive since they
> > are used on a char by char basis in the C implementation of
> > .upper(), etc. -- do the new methods give the same performance ?
> 
> well, they're about 40% faster on my box.  ymmv, of course.

Hmm, I get a 1% performance downgrade on Linux using pgcc, but
in the end it's a win anyways :-)

What remains are the nits I posted to SF.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From guido at beopen.com  Tue Sep 26 03:44:58 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 25 Sep 2000 20:44:58 -0500
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: Your message of "Mon, 25 Sep 2000 17:14:02 -0400."
             <14799.49178.2354.77727@bitdiddle.concentric.net> 
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de> <39CF596C.17BA4DC5@interet.com> <14799.24252.537090.326130@anthem.concentric.net> <39CF69D4.E3649C69@interet.com> <200009251536.RAA26375@pandora.informatik.hu-berlin.de> <39CFBA1C.3E05B760@interet.com>  
            <14799.49178.2354.77727@bitdiddle.concentric.net> 
Message-ID: <200009260144.UAA25752@cj20424-a.reston1.va.home.com>

> I don't understand what problem this is a practical solution to.
> This thread started with MAL's questions about finding errors in
> Python code.  You mentioned an effort to write a lint-like tool.
> It may be that YACC has great support for error recovery, in which
> case MAL might want to look at it for his tool.
> 
> But in general, the most practical solution for parsing Python is
> probably to use the Python parser and the builtin parser module.  It
> already exists and seems to work just fine.

Probably not that relevant any more, but MAL originally asked for a
parser that doesn't stop at the first error.  That's a real weakness
of the existing parser!!!
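(For illustration, one crude way to keep going past the first error with the stock parser is to blank out the offending line and recompile.  This is purely a heuristic sketch, not anything the parser itself offers, and find_syntax_errors is a made-up name:)

```python
def find_syntax_errors(source, max_errors=10):
    """Collect up to max_errors SyntaxErrors by blanking each bad
    line and re-trying the compile, instead of stopping at the
    first error the way the parser itself does."""
    lines = source.splitlines()
    errors = []
    for _ in range(max_errors):
        try:
            compile("\n".join(lines), "<buffer>", "exec")
            break
        except SyntaxError as err:
            errors.append((err.lineno, err.msg))
            if err.lineno is None or not 1 <= err.lineno <= len(lines):
                break
            lines[err.lineno - 1] = ""  # blank the bad line and retry
    return errors

print(find_syntax_errors("x = !\ny = 2\nz = ?\n"))
```

Blanking a line inside a multi-line construct can of course cascade into follow-on errors, which is exactly the error-recovery problem YACC-style parsers try to address.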

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From greg at cosc.canterbury.ac.nz  Tue Sep 26 03:13:19 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Tue, 26 Sep 2000 13:13:19 +1200 (NZST)
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: <200009260144.UAA25752@cj20424-a.reston1.va.home.com>
Message-ID: <200009260113.NAA23556@s454.cosc.canterbury.ac.nz>

Guido:

> MAL originally asked for a
> parser that doesn't stop at the first error.  That's a real weakness
> of the existing parser!!!

Is it really worth putting a lot of effort into this?
In my experience, the vast majority of errors I get from
Python are run-time errors, not parse errors.

(If you could find multiple run-time errors in one go,
*that* would be an impressive trick!)

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From mwh21 at cam.ac.uk  Tue Sep 26 14:15:26 2000
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: Tue, 26 Sep 2000 13:15:26 +0100 (BST)
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: <200009260113.NAA23556@s454.cosc.canterbury.ac.nz>
Message-ID: <Pine.SOL.4.21.0009261309240.22922-100000@yellow.csi.cam.ac.uk>

On Tue, 26 Sep 2000, Greg Ewing wrote:

> Guido:
> 
> > MAL originally asked for a
> > parser that doesn't stop at the first error.  That's a real weakness
> > of the existing parser!!!
> 
> Is it really worth putting a lot of effort into this?

It might be if you were trying to develop an IDE that could syntactically
analyse what the user was typing even if he/she had left a half-finished
expression further up in the buffer (I'd kind of assumed this was the
goal).  So you're not continuing after errors, exactly, more like
unfinishednesses (or some better word...).

I guess one approach to this would be to divide up the buffer according
to indentation and then parse each block, as delimited by the
indentation, individually.

Two random points:

1) Triple-quoted strings are going to be a problem.
2) Has anyone gotten flex to tokenize Python?  I was looking at the manual
   yesterday and it didn't look impossible, although a bit tricky.

Cheers,
M.




From jim at interet.com  Tue Sep 26 15:23:47 2000
From: jim at interet.com (James C. Ahlstrom)
Date: Tue, 26 Sep 2000 09:23:47 -0400
Subject: [Python-Dev] Python syntax checker ?
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de>
		<39CF596C.17BA4DC5@interet.com>
		<14799.24252.537090.326130@anthem.concentric.net>
		<39CF69D4.E3649C69@interet.com>
		<200009251536.RAA26375@pandora.informatik.hu-berlin.de>
		<39CFBA1C.3E05B760@interet.com> <14799.49178.2354.77727@bitdiddle.concentric.net>
Message-ID: <39D0A363.2DE02593@interet.com>

Jeremy Hylton wrote:

> I don't understand what problem this is a practical solution to.

To recover from errors better by using YACC's built-in error
recovery features.  Maybe unifying the C and Java parsers.  I
admit I don't know how JPython parses Python.

I kind of threw in my objection to tokenize.py, which should be
combined with tokenizer.c.  Of course it is work that only
results in the same operation as before, but it reduces the code
base.  Not a popular project.

> But in general, the most practical solution for parsing Python is
> probably to use the Python parser and the builtin parser module.  It
> already exists and seems to work just fine.

A very good point.  I am not 100% sure it is worth it.  But I
found the current parser unworkable for my project.

JimA



From bwarsaw at beopen.com  Tue Sep 26 16:43:24 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Tue, 26 Sep 2000 10:43:24 -0400 (EDT)
Subject: [Python-Dev] Python syntax checker ?
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de>
	<39CF596C.17BA4DC5@interet.com>
	<14799.24252.537090.326130@anthem.concentric.net>
	<39CF69D4.E3649C69@interet.com>
	<200009251536.RAA26375@pandora.informatik.hu-berlin.de>
	<39CFBA1C.3E05B760@interet.com>
	<14799.49178.2354.77727@bitdiddle.concentric.net>
	<39D0A363.2DE02593@interet.com>
Message-ID: <14800.46604.587756.479012@anthem.concentric.net>

>>>>> "JCA" == James C Ahlstrom <jim at interet.com> writes:

    JCA> To recover from errors better by using YACC's built-in error
    JCA> recovery features.  Maybe unifying the C and Java parsers.  I
    JCA> admit I don't know how JPython parses Python.

It uses JavaCC.

http://www.metamata.com/javacc/

-Barry



From thomas at xs4all.net  Tue Sep 26 20:20:53 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 26 Sep 2000 20:20:53 +0200
Subject: [Python-Dev] [OT] ApacheCon 2000
Message-ID: <20000926202053.K20757@xs4all.nl>

I'm (off-topically) wondering if anyone here is going to the Apache
Conference in London, October 23-25, and how I'm going to recognize them
(my PythonLabs shirt will probably not last more than a day, and I don't
have any other python-related shirts ;) 

I'm also wondering if anyone knows a halfway-decent hotel somewhat near the
conference site (Olympia Conference Centre, Kensington). I have a
reservation at the Hilton, but it's bloody expensive and damned hard to deal
with, over the phone. I don't mind the price (boss pays) but I'd think
they'd not treat potential customers like village idiots ;P

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From jeremy at beopen.com  Tue Sep 26 21:01:27 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Tue, 26 Sep 2000 15:01:27 -0400 (EDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
Message-ID: <14800.62087.617722.272109@bitdiddle.concentric.net>

We have tar balls and RPMs available on our private FTP site,
python.beopen.com.  If you have a chance to test these on your
platform in the next couple of hours, feedback would be appreciated.
We've tested on FreeBSD and RH and Mandrake Linux.

What we're most interested in hearing about is whether it builds
cleanly and runs the regression test.

The actual release will occur later today from pythonlabs.com.

Jeremy



From fdrake at beopen.com  Tue Sep 26 21:43:42 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Tue, 26 Sep 2000 15:43:42 -0400 (EDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14800.62087.617722.272109@bitdiddle.concentric.net>
References: <14800.62087.617722.272109@bitdiddle.concentric.net>
Message-ID: <14800.64622.961057.204969@cj42289-a.reston1.va.home.com>

Jeremy Hylton writes:
 > We have tar balls and RPMs available on our private FTP site,
 > python.beopen.com.  If you have a chance to test these on your
 > platform in the next couple of hours, feedback would be appreciated.
 > We've tested on FreeBSD and RH and Mandrake Linux.

  I've just built & tested on Caldera 2.3 on the SourceForge compile
farm, and am getting some failures.  If anyone who knows Caldera can
figure these out, that would be great (I'll turn them into proper bug
reports later).
  The failing tests are for fcntl, openpty, and pty.  Here's the
output of regrtest -v for those tests:

bash$ ./python -tt ../Lib/test/regrtest.py -v test_{fcntl,openpty,pty}
test_fcntl
test_fcntl
Status from fnctl with O_NONBLOCK:  0
struct.pack:  '\001\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'test test_fcntl crashed -- exceptions.IOError: [Errno 37] No locks available
Traceback (most recent call last):
  File "../Lib/test/regrtest.py", line 235, in runtest
    __import__(test, globals(), locals(), [])
  File "../Lib/test/test_fcntl.py", line 31, in ?
    rv = fcntl.fcntl(f.fileno(), FCNTL.F_SETLKW, lockdata)
IOError: [Errno 37] No locks available
test_openpty
test_openpty
Calling os.openpty()
test test_openpty crashed -- exceptions.OSError: [Errno 2] No such file or directory
Traceback (most recent call last):
  File "../Lib/test/regrtest.py", line 235, in runtest
    __import__(test, globals(), locals(), [])
  File "../Lib/test/test_openpty.py", line 9, in ?
    master, slave = os.openpty()
OSError: [Errno 2] No such file or directory
test_pty
test_pty
Calling master_open()
Got master_fd '5', slave_name '/dev/ttyp0'
Calling slave_open('/dev/ttyp0')
test test_pty skipped --  Pseudo-terminals (seemingly) not functional.
2 tests failed: test_fcntl test_openpty
1 test skipped: test_pty


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From effbot at telia.com  Tue Sep 26 22:05:13 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 26 Sep 2000 22:05:13 +0200
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
References: <14800.62087.617722.272109@bitdiddle.concentric.net>
Message-ID: <004901c027f5$1d743640$766940d5@hagrid>

jeremy wrote:

> We have tar balls and RPMs available on our private FTP site,
> python.beopen.com.  If you have a chance to test these on your
> platform in the next couple of hours, feedback would be appreciated.
> We've tested on FreeBSD and RH and Mandrake Linux.

is the windows installer up to date?

I just grabbed it, only to get a "corrupt installation detected" message
box (okay, I confess: I do have a PythonWare distro installed, but
maybe you could use a slightly more polite message? ;-)

</F>




From tim_one at email.msn.com  Tue Sep 26 21:59:34 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 26 Sep 2000 15:59:34 -0400
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14800.62087.617722.272109@bitdiddle.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEDGHIAA.tim_one@email.msn.com>

[Jeremy Hylton]
> We have tar balls and RPMs available on our private FTP site,
> python.beopen.com.

I think he meant to add under /pub/tmp/.  In any case, that's where the
2.0b2 Windows installer is now:

    BeOpen-Python-2.0b2.exe
    5,667,334 bytes
    SHA digest:  4ec69734d9931f5b83b391b2a9606c2d4e793428

> If you have a chance to test these on your platform in the next
> couple of hours, feedback would be appreciated.  We've tested on
> FreeBSD and RH and Mandrake Linux.

Would also be cool if at least one person other than me tried the Windows
installer.  I usually pick on Guido for this (just as he used to pick on
me), but, alas, he's somewhere in transit mid-continent.

executives!-ly y'rs  - tim





From jeremy at beopen.com  Tue Sep 26 22:05:44 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Tue, 26 Sep 2000 16:05:44 -0400 (EDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <004901c027f5$1d743640$766940d5@hagrid>
References: <14800.62087.617722.272109@bitdiddle.concentric.net>
	<004901c027f5$1d743640$766940d5@hagrid>
Message-ID: <14801.408.372215.493355@bitdiddle.concentric.net>

>>>>> "FL" == Fredrik Lundh <effbot at telia.com> writes:

  FL> jeremy wrote:
  >> We have tar balls and RPMs available on our private FTP site,
  >> python.beopen.com.  If you have a chance to test these on your
  >> platform in the next couple of hours, feedback would be
  >> appreciated.  We've tested on FreeBSD and RH and Mandrake Linux.

  FL> is the windows installer up to date?

No.  Tim has not done the Windows installer yet.  It's coming...

  FL> I just grabbed it, only to get a "corrupt installation detected"
  FL> message box (okay, I confess: I do have a PythonWare distro
  FL> installed, but maybe you could use a slightly more polite
  FL> message? ;-)

Did you grab the 2.0b1 exe?  I would not be surprised if the one in
/pub/tmp did not work.  It's probably an old pre-release version of
the beta 1 Windows installer.

Jeremy





From tim_one at email.msn.com  Tue Sep 26 22:01:23 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 26 Sep 2000 16:01:23 -0400
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <004901c027f5$1d743640$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEDHHIAA.tim_one@email.msn.com>

[/F]
> is the windows installer up to date?
>
> I just grabbed it, only to get a "corrupt installation detected" message
> box (okay, I confess: I do have a PythonWare distro installed, but
> maybe you could use a slightly more polite message? ;-)

I'm pretty sure you grabbed it while the scp from my machine was still in
progress.  Try it again!  While BeOpen.com has no official policy toward
PythonWare, I think it's cool.





From tim_one at email.msn.com  Tue Sep 26 22:02:48 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 26 Sep 2000 16:02:48 -0400
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14801.408.372215.493355@bitdiddle.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEDIHIAA.tim_one@email.msn.com>

All the Windows installers under /pub/tmp/ should work fine.  Although only
2.0b2 should be of any interest to anyone anymore.





From fdrake at beopen.com  Tue Sep 26 22:05:19 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Tue, 26 Sep 2000 16:05:19 -0400 (EDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14800.64622.961057.204969@cj42289-a.reston1.va.home.com>
References: <14800.62087.617722.272109@bitdiddle.concentric.net>
	<14800.64622.961057.204969@cj42289-a.reston1.va.home.com>
Message-ID: <14801.383.799094.8428@cj42289-a.reston1.va.home.com>

Fred L. Drake, Jr. writes:
 >   I've just built & tested on Caldera 2.3 on the SourceForge compile
 > farm, and am getting some failures.  If anyone who knows Caldera can
 > figure these out, that would be great (I'll turn them into proper bug
 > reports later).
 >   The failing tests are for fcntl, openpty, and pty.  Here's the
 > output of regrtest -v for those tests:

  These same tests fail in what appears to be the same way on SuSE 6.3
(using the SourceForge compile farm).  Does anyone know the vagaries
of Linux libc versions enough to tell if this is a libc5/glibc6
difference?  Or a difference in kernel versions?
  On to Slackware...


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From effbot at telia.com  Tue Sep 26 22:08:09 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 26 Sep 2000 22:08:09 +0200
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
References: <14800.62087.617722.272109@bitdiddle.concentric.net> <004901c027f5$1d743640$766940d5@hagrid>
Message-ID: <000001c027f7$e0915480$766940d5@hagrid>

I wrote:
> I just grabbed it, only to get a "corrupt installation detected" message
> box (okay, I confess: I do have a PythonWare distro installed, but
> maybe you could use a slightly more polite message? ;-)

nevermind; the size of the file keeps changing on the site, so
I guess someone's uploading it (over and over again?)

</F>




From nascheme at enme.ucalgary.ca  Tue Sep 26 22:16:10 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Tue, 26 Sep 2000 14:16:10 -0600
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14801.383.799094.8428@cj42289-a.reston1.va.home.com>; from Fred L. Drake, Jr. on Tue, Sep 26, 2000 at 04:05:19PM -0400
References: <14800.62087.617722.272109@bitdiddle.concentric.net> <14800.64622.961057.204969@cj42289-a.reston1.va.home.com> <14801.383.799094.8428@cj42289-a.reston1.va.home.com>
Message-ID: <20000926141610.A6557@keymaster.enme.ucalgary.ca>

On Tue, Sep 26, 2000 at 04:05:19PM -0400, Fred L. Drake, Jr. wrote:
>   These same tests fail in what appears to be the same way on SuSE 6.3
> (using the SourceForge compile farm).  Does anyone know the vagaries
> of Linux libc versions enough to tell if this is a libc5/glibc6
> difference?  Or a difference in kernel versions?

I don't know much but having the output from "uname -a" and "ldd python"
could be helpful (ie. which kernel and which libc).

  Neil



From tim_one at email.msn.com  Tue Sep 26 22:17:52 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 26 Sep 2000 16:17:52 -0400
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <000001c027f7$e0915480$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEDJHIAA.tim_one@email.msn.com>

> nevermind; the size of the file keeps changing on the site, so
> I guess someone's uploading it (over and over again?)

No, I uploaded it exactly once, but it took over an hour to complete
uploading.  That's done now.  If it *still* fails for you, then gripe.  You
simply jumped the gun by grabbing it before anyone said it was ready.





From fdrake at beopen.com  Tue Sep 26 22:32:21 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Tue, 26 Sep 2000 16:32:21 -0400 (EDT)
Subject: [Python-Dev] 2.0b2 on Slackware 7.0
Message-ID: <14801.2005.843456.598712@cj42289-a.reston1.va.home.com>

  I just built and tested 2.0b2 on Slackware 7.0, and found that
threads failed miserably.  I got the message:

pthread_cond_wait: Interrupted system call

over & over (*hundreds* of times before I killed it) during one of the
tests (test_fork1.py? it scrolled out of the scrollback buffer, 2000
lines).  If I configure it --without-threads it works great.  Unless
you need threads.

uname -a says:
Linux linux1.compile.sourceforge.net 2.2.14-5.0.14smp #1 SMP Sun Mar 26 13:03:52 PST 2000 i686 unknown

ldd ./python says:
	libdb.so.3 => /lib/libdb.so.3 (0x4001c000)
	libdl.so.2 => /lib/libdl.so.2 (0x40056000)
	libutil.so.1 => /lib/libutil.so.1 (0x4005a000)
	libm.so.6 => /lib/libm.so.6 (0x4005d000)
	libc.so.6 => /lib/libc.so.6 (0x4007a000)
	/lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)

  If anyone has any ideas, please send them along!  I'll turn this
into a real bug report later.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From effbot at telia.com  Tue Sep 26 22:48:49 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 26 Sep 2000 22:48:49 +0200
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
References: <14800.62087.617722.272109@bitdiddle.concentric.net> <004901c027f5$1d743640$766940d5@hagrid> <000001c027f7$e0915480$766940d5@hagrid>
Message-ID: <005901c027fb$2ecf8380$766940d5@hagrid>

> nevermind; the size of the file keeps changing on the site, so
> I guess someone's uploading it (over and over again?)

heh.  just discovered that my ISP has introduced a new
policy: if you send stupid messages, we'll knock you off
the net for 30 minutes...

anyway, I've now downloaded the installer, and it works
pretty well...

:::

just one weird thing:

according to dir, I have 41 megs on my C: disk before
running the installer...

according to the installer, I have 22.3 megs, but Python
only requires 18.3 megs, so it should be okay...

but a little later, the installer claims that it needs an
additional 21.8 megs free space...  if I click ignore, the
installer proceeds (but boy, is it slow or what? ;-)

after installation (but before reboot) (reboot!?), I have
19.5 megs free.

hmm...

after uninstalling, I have 40.7 megs free.  there's still
some crud in the Python20\Tools\idle directory.

after removing that stuff, I have 40.8 megs free.

close enough ;-)

on a second run, it claims that I have 21.3 megs free, and
that the installer needs another 22.8 megs to complete
installation.

:::

without rebooting, IDLE refuses to start, but the console
window works fine...

</F>




From martin at loewis.home.cs.tu-berlin.de  Tue Sep 26 22:34:41 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Tue, 26 Sep 2000 22:34:41 +0200
Subject: [Python-Dev] Bogus SAX test case
Message-ID: <200009262034.WAA09761@loewis.home.cs.tu-berlin.de>

test_sax.py has the test case test_xmlgen_ns, which reads

ns_uri = "http://www.python.org/xml-ns/saxtest/"

    gen.startDocument()
    gen.startPrefixMapping("ns1", ns_uri)
    gen.startElementNS((ns_uri, "doc"), "ns:doc", {})
    gen.endElementNS((ns_uri, "doc"), "ns:doc")
    gen.endPrefixMapping("ns1")
    gen.endDocument()

Translating that to XML, it should look like

<?xml version="1.0" encoding="iso-8859-1"?>
<ns:doc xmlns:ns1="http://www.python.org/xml-ns/saxtest/"></ns:doc>

(or, alternatively, the element could just be empty). Is that the XML
that would produce above sequence of SAX events?

It seems to me that this XML is ill-formed: the namespace prefix ns is
not defined here.  Is that analysis correct?  Furthermore, the test
checks whether the generator produces

<?xml version="1.0" encoding="iso-8859-1"?>
<ns1:doc xmlns:ns1="http://www.python.org/xml-ns/saxtest/"></ns1:doc>

It appears that the expected output is bogus; I'd rather expect to get
the original document back.

I noticed this because in PyXML, XMLGenerator *would* produce ns:doc
on output, so the test case broke. I have now changed PyXML to follow
Python 2.0b2 here.

My proposal would be to correct the test case to pass "ns1:doc" as the
qname, and to correct the generator to output the qname if that was
provided by the reader.
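(For reference, the event sequence above can be replayed against saxutils.XMLGenerator directly, writing to a StringIO here instead of a file and passing "ns1:doc" as the qname per the proposal:)

```python
from io import StringIO
from xml.sax.saxutils import XMLGenerator

ns_uri = "http://www.python.org/xml-ns/saxtest/"

out = StringIO()
gen = XMLGenerator(out, encoding="iso-8859-1")
gen.startDocument()
gen.startPrefixMapping("ns1", ns_uri)
# qname "ns1:doc", matching the declared prefix, instead of the
# undeclared "ns:doc"
gen.startElementNS((ns_uri, "doc"), "ns1:doc", {})
gen.endElementNS((ns_uri, "doc"), "ns1:doc")
gen.endPrefixMapping("ns1")
gen.endDocument()
print(out.getvalue())
```

The generated document should then declare xmlns:ns1 and use ns1:doc for both the start and end tags, i.e. the prefix actually in scope.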

Comments?

Regards,
Martin



From effbot at telia.com  Tue Sep 26 22:57:11 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 26 Sep 2000 22:57:11 +0200
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
References: <14800.62087.617722.272109@bitdiddle.concentric.net> <004901c027f5$1d743640$766940d5@hagrid> <000001c027f7$e0915480$766940d5@hagrid> <005901c027fb$2ecf8380$766940d5@hagrid>
Message-ID: <000a01c027fc$6942c800$766940d5@hagrid>

I wrote:
> without rebooting, IDLE refuses to start, but the console
> window works fine...

fwiw, rebooting didn't help.

</F>




From thomas at xs4all.net  Tue Sep 26 22:51:47 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 26 Sep 2000 22:51:47 +0200
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14800.64622.961057.204969@cj42289-a.reston1.va.home.com>; from fdrake@beopen.com on Tue, Sep 26, 2000 at 03:43:42PM -0400
References: <14800.62087.617722.272109@bitdiddle.concentric.net> <14800.64622.961057.204969@cj42289-a.reston1.va.home.com>
Message-ID: <20000926225146.L20757@xs4all.nl>

On Tue, Sep 26, 2000 at 03:43:42PM -0400, Fred L. Drake, Jr. wrote:

>   The failing tests are for fcntl, openpty, and pty.  Here's the
> output of regrtest -v for those tests:

> bash$ ./python -tt ../Lib/test/regrtest.py -v test_{fcntl,openpty,pty}
> test_fcntl
> test_fcntl
> Status from fnctl with O_NONBLOCK:  0
> struct.pack:  '\001\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'test test_fcntl crashed -- exceptions.IOError: [Errno 37] No locks available
> Traceback (most recent call last):
>   File "../Lib/test/regrtest.py", line 235, in runtest
>     __import__(test, globals(), locals(), [])
>   File "../Lib/test/test_fcntl.py", line 31, in ?
>     rv = fcntl.fcntl(f.fileno(), FCNTL.F_SETLKW, lockdata)
> IOError: [Errno 37] No locks available

Looks like your /tmp directory doesn't support locks. Perhaps it's some kind
of RAMdisk ? See if you can find a 'normal' filesystem (preferably not NFS)
where you have write-permission, and change the /tmp/delete-me path in
test_fcntl to that.
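A quick probe for that, sketched with fcntl.lockf (which wraps the same POSIX record-locking machinery the test exercises via F_SETLKW); supports_posix_locks is just an illustrative helper name:

```python
import fcntl
import os
import tempfile

def supports_posix_locks(directory):
    """Try to take and release an exclusive POSIX lock on a scratch
    file in `directory`; returns False on filesystems (some NFS or
    RAM-disk mounts) where locking isn't available."""
    fd, path = tempfile.mkstemp(dir=directory)
    try:
        try:
            fcntl.lockf(fd, fcntl.LOCK_EX)
            fcntl.lockf(fd, fcntl.LOCK_UN)
        except OSError:
            return False
        return True
    finally:
        os.close(fd)
        os.unlink(path)

print(supports_posix_locks(tempfile.gettempdir()))
```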

> test_openpty
> test_openpty
> Calling os.openpty()
> test test_openpty crashed -- exceptions.OSError: [Errno 2] No such file or directory
> Traceback (most recent call last):
>   File "../Lib/test/regrtest.py", line 235, in runtest
>     __import__(test, globals(), locals(), [])
>   File "../Lib/test/test_openpty.py", line 9, in ?
>     master, slave = os.openpty()
> OSError: [Errno 2] No such file or directory

If you're running glibc (which is pretty likely, because IIRC libc5 didn't
have an openpty() call, so test_openpty should be skipped) openpty() is
defined as a library routine that tries to open /dev/ptmx. That's the kernel
support for Unix98 pty's. However, it's possible that support is turned off
in the default Caldera kernel, or perhaps /dev/ptmx does not exist (what
kernel are you running, btw ?) /dev/ptmx was new in 2.1.x, so if you're
running 2.0 kernels, that might be the problem.

I'm not sure if you're supposed to get that error, though. I've never tested
glibc's openpty() support on a system that had it turned off, but I have
seen *almost* exactly the same error message from BSDI's openpty() call,
which works by sequentially trying to open each pty until it finds one that
works. 

> test_pty
> test_pty
> Calling master_open()
> Got master_fd '5', slave_name '/dev/ttyp0'
> Calling slave_open('/dev/ttyp0')
> test test_pty skipped --  Pseudo-terminals (seemingly) not functional.
> 2 tests failed: test_fcntl test_openpty
> 1 test skipped: test_pty

The 'normal' procedure for opening pty's is to open the master, and if that
works, the pty is functional... But it looks like you could open the master,
but not the slave. Possibly permission problems, or a messed up /dev
directory. Do you know if /dev/ttyp0 was in use while you were running the
test ? (it's pretty likely it was, since it's usually the first pty on the
search list.) What might be happening here is that the master is openable,
for some reason, even if the pty/tty pair is already in use, but the slave
isn't openable. That would mean that the pty library is basically
nonfunctional on those platforms, and it's definitely not the behaviour
I've seen on other platforms :P And this wouldn't be a new thing, because
the pty module has always worked this way.
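To make that check concrete, a sketch (pty_pair_usable is a name of my own; pty.openpty() opens both ends, unlike the master-only probe described above):

```python
import os
import pty

def pty_pair_usable():
    """Probe whether a master/slave pty pair can actually be opened;
    opening just the master is not proof the pair is functional."""
    try:
        master, slave = pty.openpty()
    except OSError:
        return False
    os.close(slave)
    os.close(master)
    return True

print(pty_pair_usable())
```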

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From tim_one at email.msn.com  Tue Sep 26 22:56:33 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 26 Sep 2000 16:56:33 -0400
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <005901c027fb$2ecf8380$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEDLHIAA.tim_one@email.msn.com>

[Fredrik Lundh]
> ...
> just one weird thing:
>
> according to dir, I have 41 megs on my C: disk before
> running the installer...
>
> according to the installer, I have 22.3 megs,

This is the Wise "Check free disk space" "Script item".  Now you know as
much about it as I do <wink>.

> but Python only requires 18.3 megs, so it should be okay...

Noting that 22.3 + 18.3 ~= 41.  So it sounds like Wise's "Disk space
remaining" is trying to tell you how much space you'll have left *after* the
install.  Indeed, if you try unchecking various items in the "Select
Components" dialog, you should see that the "Disk space remaining" changes
accordingly.

> but a little later, the installer claims that it needs an
> additional 21.8 megs free space...  if I click ignore, the
> installer proceeds (but boy, is it slow or what? ;-)

Win95?  Which version?  The installer runs very quickly for me (Win98).
I've never tried it without plenty of free disk space, though; maybe it
needs temp space for unpacking?  Dunno.

> after installation (but before reboot) (reboot!?), I have
> 19.5 megs free.

It's unclear here whether the installer did or did not *say* it wanted you
to reboot.  It should ask for a reboot if and only if it needs to update an
MS shared DLL (the installer ships with MSVCRT.DLL and MSCVICRT.DLL).

> hmm...
>
> after uninstalling, I have 40.7 megs free.  there's still
> some crud in the Python20\Tools\idle directory.

Like what?  .pyc files, perhaps?  Like most uninstallers, it will not delete
files it didn't install, so all .pyc files (or anything else) generated
after the install won't be touched.

> after removing that stuff, I have 40.8 megs free.
>
> close enough ;-)
>
> on a second run, it claims that I have 21.3 megs free, and
> that the installer needs another 22.8 megs to complete in-
> stallation.

Noted.

> without rebooting, IDLE refuses to start, but the console
> window works fine...

If it told you to reboot and you didn't, I don't really care what happens if
you ignore the instructions <wink>.  Does IDLE start after you reboot?

thanks-for-the-pain!-ly y'rs  - tim





From tim_one at email.msn.com  Tue Sep 26 23:02:14 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 26 Sep 2000 17:02:14 -0400
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <000a01c027fc$6942c800$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEDMHIAA.tim_one@email.msn.com>

[/F]
> I wrote:
> > without rebooting, IDLE refuses to start, but the console
> > window works fine...
>
> fwiw, rebooting didn't help.

So let's start playing bug report:  Which version of Windows?  By what means
did you attempt to start IDLE?  What does "refuses to start" mean (error
msg, system freeze, hourglass that never goes away, pops up & vanishes,
nothing visible happens at all, ...)?  Does Tkinter._test() work from a
DOS-box Python?  Do you have magical Tcl/Tk envars set for your own
development work?  Stuff like that.





From effbot at telia.com  Tue Sep 26 23:30:09 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 26 Sep 2000 23:30:09 +0200
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
References: <LNBBLJKPBEHFEDALKOLCOEDMHIAA.tim_one@email.msn.com>
Message-ID: <001b01c02800$f3996000$766940d5@hagrid>

tim wrote,
> > fwiw, rebooting didn't help.

> So let's start playing bug report:

oh, I've figured it out (what did you expect ;-). read on.

> Which version of Windows?

Windows 95 OSR 2.

> By what means did you attempt to start IDLE?

> What does "refuses to start" mean (error msg, system freeze,
> hourglass that never goes away, pops up & vanishes, nothing
> visible happens at all, ...)?

idle never appears.

> Does Tkinter._test() work from a DOS-box Python?

yes -- but it hangs if I close it with the "x" button (same
problem as I've reported earlier).

> Do you have magical Tcl/Tk envars set for your own
> development work?

bingo!

(a global PYTHONPATH setting also resulted in some interesting
behaviour... on my wishlist for 2.1: an option telling Python to
ignore all PYTHON* environment variables...)

</F>




From tim_one at email.msn.com  Tue Sep 26 23:50:54 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 26 Sep 2000 17:50:54 -0400
Subject: [Python-Dev] Crisis aversive
Message-ID: <LNBBLJKPBEHFEDALKOLCGEEAHIAA.tim_one@email.msn.com>

I'm going to take a nap now.  If there's a Windows crisis for the duration,
mail pleas for urgent assistance to bwarsaw at beopen.com -- especially if it
involves interactions between a Python script running as an NT service and
python-mode.el under NT Emacs.  Barry *loves* those!

Back online in a few hours.

sometimes-when-you-hit-the-wall-you-stick-ly y'rs  - tim





From fdrake at beopen.com  Tue Sep 26 23:50:16 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Tue, 26 Sep 2000 17:50:16 -0400 (EDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <20000926141610.A6557@keymaster.enme.ucalgary.ca>
References: <14800.62087.617722.272109@bitdiddle.concentric.net>
	<14800.64622.961057.204969@cj42289-a.reston1.va.home.com>
	<14801.383.799094.8428@cj42289-a.reston1.va.home.com>
	<20000926141610.A6557@keymaster.enme.ucalgary.ca>
Message-ID: <14801.6680.507173.995404@cj42289-a.reston1.va.home.com>

Neil Schemenauer writes:
 > I don't know much but having the output from "uname -a" and "ldd python"
 > could be helpful (ie. which kernel and which libc).

Under SuSE 6.3, uname -a says:
Linux linux1.compile.sourceforge.net 2.2.14-5.0.14smp #1 SMP Sun Mar 26 13:03:52 PST 2000 i686 unknown

ldd ./python says:
	libdb.so.3 => /lib/libdb.so.3 (0x4001d000)
	libpthread.so.0 => /lib/libpthread.so.0 (0x4005c000)
	libdl.so.2 => /lib/libdl.so.2 (0x4006e000)
	libutil.so.1 => /lib/libutil.so.1 (0x40071000)
	libm.so.6 => /lib/libm.so.6 (0x40075000)
	libc.so.6 => /lib/libc.so.6 (0x40092000)
	/lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)

Under Caldera 2.3, uname -a says:
Linux linux1.compile.sourceforge.net 2.2.14-5.0.14smp #1 SMP Sun Mar 26 13:03:52 PST 2000 i686 unknown

ldd ./python says:
	libdb.so.3 => /lib/libdb.so.3 (0x4001a000)
	libpthread.so.0 => /lib/libpthread.so.0 (0x40055000)
	libdl.so.2 => /lib/libdl.so.2 (0x40066000)
	libutil.so.1 => /lib/libutil.so.1 (0x4006a000)
	libm.so.6 => /lib/libm.so.6 (0x4006d000)
	libc.so.6 => /lib/libc.so.6 (0x4008a000)
	/lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)

  Now, it may be that something strange is going on since these are
the "virtual environments" on SourceForge.  I'm not sure these are
really the same thing as running those systems.  I'm looking at the
script to start SuSE; there's nothing really there but a chroot call;
perhaps there's a kernel/library mismatch?
  I'll have to ask about how these are supposed to work a little
more; kernel/libc mismatches could be a real problem in this
environment.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From fdrake at beopen.com  Tue Sep 26 23:52:59 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Tue, 26 Sep 2000 17:52:59 -0400 (EDT)
Subject: [Python-Dev] Crisis aversive
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEEAHIAA.tim_one@email.msn.com>
References: <LNBBLJKPBEHFEDALKOLCGEEAHIAA.tim_one@email.msn.com>
Message-ID: <14801.6843.516029.921562@cj42289-a.reston1.va.home.com>

Tim Peters writes:
 > sometimes-when-you-hit-the-wall-you-stick-ly y'rs  - tim

  I told you to take off that Velcro body armor!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From bwarsaw at beopen.com  Tue Sep 26 23:57:22 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Tue, 26 Sep 2000 17:57:22 -0400 (EDT)
Subject: [Python-Dev] Crisis aversive
References: <LNBBLJKPBEHFEDALKOLCGEEAHIAA.tim_one@email.msn.com>
Message-ID: <14801.7106.388711.967339@anthem.concentric.net>

>>>>> "TP" == Tim Peters <tim_one at email.msn.com> writes:

    TP> I'm going to take a nap now.  If there's a Windows crisis for
    TP> the duration, mail pleas for urgent assistance to
    TP> bwarsaw at beopen.com -- especially if it involves interactions
    TP> between a Python script running as an NT service and
    TP> python-mode.el under NT Emacs.  Barry *loves* those!

Indeed!  I especially love these because I don't have a working
Windows system at the moment, so every such bug just gets classified
as non-reproducible.

or-"works-for-me"-about-as-well-as-if-i-did-have-windows-ly y'rs,
-Barry



From tommy at ilm.com  Wed Sep 27 00:55:02 2000
From: tommy at ilm.com (Victor the Cleaner)
Date: Tue, 26 Sep 2000 15:55:02 -0700 (PDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14800.62087.617722.272109@bitdiddle.concentric.net>
References: <14800.62087.617722.272109@bitdiddle.concentric.net>
Message-ID: <14801.10496.986326.537462@mace.lucasdigital.com>

Hi All,

Jeremy asked me to send this report (which I originally sent just to
him) along to the rest of python-dev, so here ya go:

------------%< snip %<----------------------%< snip %<------------

Hey Jeremy,

Configured (--without-gcc), made and ran just fine on my IRIX6.5 O2.
The "make test" output indicated a lot of skipped modules since I
didn't do any Setup.in modifications before making everything, and the 
only error came from test_unicodedata:

test test_unicodedata failed -- Writing: 'e052289ecef97fc89c794cf663cb74a64631d34e', expected: 'b88684df19fca8c3d0ab31f040dd8de89f7836fe'

Nothing else that ran had any errors.  Here's the final output:

77 tests OK.
1 test failed: test_unicodedata
24 tests skipped: test_al test_audioop test_cd test_cl test_crypt test_dbm test_dl test_gdbm test_gl test_gzip test_imageop test_imgfile test_linuxaudiodev test_minidom test_nis test_pty test_pyexpat test_rgbimg test_sax test_sunaudiodev test_timing test_winreg test_winsound test_zlib

is there anything I can do to help debug the unicodedata failure?

------------%< snip %<----------------------%< snip %<------------

Jeremy Hylton writes:
| We have tar balls and RPMs available on our private FTP site,
| python.beopen.com.  If you have a chance to test these on your
| platform in the next couple of hours, feedback would be appreciated.
| We've tested on FreeBSD and RH and Mandrake Linux.
| 
| What we're most interested in hearing about is whether it builds
| cleanly and runs the regression test.
| 
| The actual release will occur later today from pythonlabs.com.
| 
| Jeremy
| 
| _______________________________________________
| Python-Dev mailing list
| Python-Dev at python.org
| http://www.python.org/mailman/listinfo/python-dev



From jeremy at beopen.com  Wed Sep 27 01:07:03 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Tue, 26 Sep 2000 19:07:03 -0400 (EDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14801.10496.986326.537462@mace.lucasdigital.com>
References: <14800.62087.617722.272109@bitdiddle.concentric.net>
	<14801.10496.986326.537462@mace.lucasdigital.com>
Message-ID: <14801.11287.963056.896941@bitdiddle.concentric.net>

I was just talking with Guido who wondered if it might simply be an
optmizer bug with the IRIX compiler.  Does the same problem occur with
optimization turned off?

Jeremy



From tommy at ilm.com  Wed Sep 27 02:01:54 2000
From: tommy at ilm.com (Victor the Cleaner)
Date: Tue, 26 Sep 2000 17:01:54 -0700 (PDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14801.11287.963056.896941@bitdiddle.concentric.net>
References: <14800.62087.617722.272109@bitdiddle.concentric.net>
	<14801.10496.986326.537462@mace.lucasdigital.com>
	<14801.11287.963056.896941@bitdiddle.concentric.net>
Message-ID: <14801.14476.284150.194816@mace.lucasdigital.com>

yes, it does.  I changed this line in the toplevel Makefile:

OPT =	-O -OPT:Olimit=0

to

OPT =

and saw no optimization going on during compiling (yes, I made clean
first) but I got the exact same result from test_unicodedata.


Jeremy Hylton writes:
| I was just talking with Guido who wondered if it might simply be an
| optimizer bug with the IRIX compiler.  Does the same problem occur with
| optimization turned off?
| 
| Jeremy



From gward at python.net  Wed Sep 27 02:11:07 2000
From: gward at python.net (Greg Ward)
Date: Tue, 26 Sep 2000 20:11:07 -0400
Subject: [Python-Dev] Stupid distutils bug
Message-ID: <20000926201107.A1179@beelzebub>

No, I mean *really* stupid.  So stupid that I nearly fell out of my
chair with embarrassment when I saw Thomas Heller's report of it, because
I released Distutils 0.9.3 *before* reading my mail.  D'oh!

Anyways, this is such a colossally stupid bug that I'm *glad* 2.0b2
hasn't gone out yet: it gives me a chance to checkin the (3-line) fix.
Here's what I plan to do:
  * tag distutils-0_9_3 (ie. last bit of bureaucracy for the
    broken, about-to-be-superseded release)
  * checkin my fix
  * release Distutils 0.9.4 (with this 3-line fix and *nothing* more)
  * tag distutils-0_9_4
  * calmly sit back and wait for Jeremy and Tim to flay me alive

Egg-on-face, paper-bag-on-head, etc. etc...

        Greg

PS. be sure to cc me: I'm doing this from home, but my python-dev
subscription goes to work.

-- 
Greg Ward                                      gward at python.net
http://starship.python.net/~gward/



From jeremy at beopen.com  Wed Sep 27 02:25:53 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Tue, 26 Sep 2000 20:25:53 -0400 (EDT)
Subject: [Python-Dev] Stupid distutils bug
In-Reply-To: <20000926201107.A1179@beelzebub>
References: <20000926201107.A1179@beelzebub>
Message-ID: <14801.16017.841176.232036@bitdiddle.concentric.net>

Greg,

The distribution tarball was cut this afternoon around 2pm.  It's way
too late to change anything in it.  Sorry.

Jeremy



From gward at python.net  Wed Sep 27 02:22:32 2000
From: gward at python.net (Greg Ward)
Date: Tue, 26 Sep 2000 20:22:32 -0400
Subject: [Python-Dev] Stupid distutils bug
In-Reply-To: <14801.16017.841176.232036@bitdiddle.concentric.net>; from jeremy@beopen.com on Tue, Sep 26, 2000 at 08:25:53PM -0400
References: <20000926201107.A1179@beelzebub> <14801.16017.841176.232036@bitdiddle.concentric.net>
Message-ID: <20000926202232.D975@beelzebub>

On 26 September 2000, Jeremy Hylton said:
> The distribution tarball was cut this afternoon around 2pm.  It's way
> too late to change anything in it.  Sorry.

!@$!#!  I didn't see anything on python.org or pythonlabs.com, so I
assumed it wasn't done yet.  Oh well, Distutils 0.9.4 will go out
shortly anyways.  I'll just go off in a corner and castigate myself
mercilessly.  Arghgh!

        Greg
-- 
Greg Ward                                      gward at python.net
http://starship.python.net/~gward/



From jeremy at beopen.com  Wed Sep 27 02:33:22 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Tue, 26 Sep 2000 20:33:22 -0400 (EDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14801.14476.284150.194816@mace.lucasdigital.com>
References: <14800.62087.617722.272109@bitdiddle.concentric.net>
	<14801.10496.986326.537462@mace.lucasdigital.com>
	<14801.11287.963056.896941@bitdiddle.concentric.net>
	<14801.14476.284150.194816@mace.lucasdigital.com>
Message-ID: <14801.16466.928385.529906@bitdiddle.concentric.net>

Sounded too easy, didn't it?  We'll just have to wait for MAL or /F to
followup.

Jeremy



From tim_one at email.msn.com  Wed Sep 27 02:34:51 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 26 Sep 2000 20:34:51 -0400
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14801.10496.986326.537462@mace.lucasdigital.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEEGHIAA.tim_one@email.msn.com>

[Victor the Cleaner]
> Jeremy asked me to send this report (which I originally sent just to
> him) along to the rest of python-dev, so here ya go:

Bug reports should go to SourceForge, else as often as not they'll get
lost.

> ------------%< snip %<----------------------%< snip %<------------
>
> Hey Jeremy,
>
> Configured (--without-gcc), made and ran just fine on my IRIX6.5 O2.
> The "make test" output indicated a lot of skipped modules since I
> didn't do any Setup.in modifications before making everything, and the
> only error came from test_unicodedata:
>
> test test_unicodedata failed -- Writing:
> 'e052289ecef97fc89c794cf663cb74a64631d34e', expected:
> 'b88684df19fca8c3d0ab31f040dd8de89f7836fe'

The problem appears to be that the test uses the secret "unicode-internal"
encoding, which is dependent upon the big/little-endianness of your platform.
I can reproduce your flawed hash exactly on my platform by replacing this
line:

        h.update(u''.join(data).encode('unicode-internal'))

in test_unicodedata.py's test_methods() with this block:

        import array
        xxx = array.array("H", map(ord, u''.join(data)))
        xxx.byteswap()
        h.update(xxx)

When you do this from a shell:

>>> u"A".encode("unicode-internal")
'A\000'
>>>

I bet you get

'\000A'

Right?
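
A compact way to see the byte-order effect Tim describes, sketched with the
explicit utf-16 codec variants (the old "unicode-internal" codec exposed
whichever layout the platform used natively):

```python
# The same character, laid out under the two possible byte orders.  A
# digest computed over these raw bytes differs between little-endian
# and big-endian machines -- exactly the test failure seen above.
le = "A".encode("utf-16-le")   # little-endian layout
be = "A".encode("utf-16-be")   # big-endian layout
print(le, be)
assert le == b"A\x00"
assert be == b"\x00A"
assert le != be
```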





From tim_one at email.msn.com  Wed Sep 27 02:39:49 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 26 Sep 2000 20:39:49 -0400
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14801.16466.928385.529906@bitdiddle.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEEHHIAA.tim_one@email.msn.com>

> Sounded too easy, didn't it?

Not at all:  an optimization bug on SGI is the *usual* outcome <0.5 wink>!

> We'll just have to wait for MAL or /F to followup.

See my earlier mail; the cause is thoroughly understood; it actually means
Unicode is working fine on his machine; but I don't know enough about
Unicode encodings to know how to rewrite the test in a portable way.





From akuchlin at cnri.reston.va.us  Wed Sep 27 02:43:24 2000
From: akuchlin at cnri.reston.va.us (Andrew Kuchling)
Date: Tue, 26 Sep 2000 20:43:24 -0400
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <001b01c02800$f3996000$766940d5@hagrid>; from effbot@telia.com on Tue, Sep 26, 2000 at 11:30:09PM +0200
References: <LNBBLJKPBEHFEDALKOLCOEDMHIAA.tim_one@email.msn.com> <001b01c02800$f3996000$766940d5@hagrid>
Message-ID: <20000926204324.A20476@newcnri.cnri.reston.va.us>

On Tue, Sep 26, 2000 at 11:30:09PM +0200, Fredrik Lundh wrote:
>on my wishlist for 2.1: an option telling Python to
>ignore all PYTHON* environment variables...)

You could just add an environment variable that did this... dohhh!
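
For the record, the wished-for option did later arrive as the interpreter's
-E switch; a quick sanity check (assuming any reasonably modern Python):

```python
# Launch a child interpreter with -E, which makes it ignore all
# PYTHON* environment variables, and confirm the flag is reported.
import subprocess
import sys

out = subprocess.run(
    [sys.executable, "-E", "-c",
     "import sys; print(sys.flags.ignore_environment)"],
    capture_output=True, text=True, check=True,
)
print(out.stdout.strip())  # → 1
```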

--am"Raymound Smullyan"k




From greg at cosc.canterbury.ac.nz  Wed Sep 27 02:51:05 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed, 27 Sep 2000 12:51:05 +1200 (NZST)
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: <Pine.SOL.4.21.0009261309240.22922-100000@yellow.csi.cam.ac.uk>
Message-ID: <200009270051.MAA23788@s454.cosc.canterbury.ac.nz>

By the way, one of the examples that comes with my
Plex module is an almost-complete Python scanner.
Just thought I'd mention it in case it would help
anyone.

http://www.cosc.canterbury.ac.nz/~greg/python/Plex

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From gward at python.net  Wed Sep 27 02:53:12 2000
From: gward at python.net (Greg Ward)
Date: Tue, 26 Sep 2000 20:53:12 -0400
Subject: [Python-Dev] Distutils 1.0 code freeze: Oct 1
Message-ID: <20000926205312.A1470@beelzebub>

Considering the following schedule of events:

  Oct  4: I go out of town (away from email, off the net, etc.)
  Oct 10: planned release of Python 2.0
  Oct 12: I'm back in town, ready to hack! (and wondering why it's
          so quiet around here...)

the Distutils 1.0 release will go out October 1 or 2.  I don't need
quite as much code freeze time as the full Python release, but let's put 
it this way: if there are features you want added to the Distutils that
I don't already know about, forget about it.  Changes currently under
consideration:

  * Rene Liebscher's rearrangement of the CCompiler classes; most
    of this is just reducing the amount of code, but it does
    add some minor features, so it's under consideration.

  * making byte-compilation more flexible: should be able to
    generate both .pyc and .pyo files, and should be able to
    do it at build time or install time (developer's and packager's
    discretion)
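
The byte-compilation item above can be sketched with the stdlib machinery
the Distutils drives; the scratch tree and module name here are made up for
illustration:

```python
# Compile a tiny source tree to .pyc, the way a build- or install-time
# byte-compilation step would.
import compileall
import glob
import os
import tempfile

tree = tempfile.mkdtemp()
with open(os.path.join(tree, "mod.py"), "w") as f:
    f.write("x = 1\n")

ok = compileall.compile_dir(tree, quiet=1)   # returns a success flag
pycs = glob.glob(os.path.join(tree, "**", "*.pyc"), recursive=True)
print(ok, pycs)
```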

If you know about any outstanding Distutils bugs, please tell me *now*.
Put 'em in the SourceForge bug database if you're wondering why I
haven't fixed them yet -- they might have gotten lost, I might not know
about 'em, etc.  If you're not sure, put it in SourceForge.

Stuff that will definitely have to wait until after 1.0:

  * a "test" command (standard test framework for Python modules)

  * finishing the "config" command (auto-configuration)

  * installing package meta-data, to support "what *do* I have
    installed, anyways?" queries, uninstallation, upgrades, etc.

Blue-sky projects:

  * standard documentation processing

  * intra-module dependencies

        Greg
-- 
Greg Ward                                      gward at python.net
http://starship.python.net/~gward/



From dkwolfe at pacbell.net  Wed Sep 27 07:15:52 2000
From: dkwolfe at pacbell.net (Dan Wolfe)
Date: Tue, 26 Sep 2000 22:15:52 -0700
Subject: [Python-Dev] FW: regarding the Python Developer posting...
Message-ID: <0G1J00FEC58TA3@mta6.snfc21.pbi.net>

Hi Marc-Andre,

Regarding:

>You could try to enable the macro at the top of unicodectype.c:
> 
>#if defined(macintosh) || defined(MS_WIN64)
>/*XXX This was required to avoid a compiler error for an early Win64
> * cross-compiler that was used for the port to Win64. When the platform is
> * released the MS_WIN64 inclusion here should no longer be necessary.
> */
>/* This probably needs to be defined for some other compilers too. It 
>breaks the
>** 5000-label switch statement up into switches with around 1000 cases each.
>*/
>#define BREAK_SWITCH_UP return 1; } switch (ch) {
>#else
>#define BREAK_SWITCH_UP /* nothing */
>#endif

I've tested it with BREAK_SWITCH_UP enabled and it fixes the
problem - same as using -traditional-cpp.  However, before we commit
this change I need to see if they are planning on fixing it... remember,
this Mac OS X is beta software.... :-)

>If it does compile with the work-around enabled, please
>give us a set of defines which identify the compiler and
>platform so we can enable it per default for your setup.

Automake is driving me nuts... it's a long way from a GUI for this poor
old Mac guy.  I'll see what I can do... stay tuned. ;-)

- Dan



From tim_one at email.msn.com  Wed Sep 27 07:39:35 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 27 Sep 2000 01:39:35 -0400
Subject: [Python-Dev] FW: regarding the Python Developer posting...
In-Reply-To: <0G1J00FEC58TA3@mta6.snfc21.pbi.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEFFHIAA.tim_one@email.msn.com>

[about the big switch in unicodectype.c]

Dan, I'll suggest again that you try working from the current CVS tree
instead.  The giant switch stmt doesn't even exist anymore!  Few developers
are going to volunteer their time to help with code that's already been
replaced.  Talk to Steven Majewski, too -- he's also keen to see this work
on Macs, and knows a lot about Python internals.





From dkwolfe at pacbell.net  Wed Sep 27 09:02:00 2000
From: dkwolfe at pacbell.net (Dan Wolfe)
Date: Wed, 27 Sep 2000 00:02:00 -0700
Subject: [Python-Dev] FW: regarding the Python Developer posting...
Message-ID: <0G1J0028SA6KS4@mta5.snfc21.pbi.net>

>>[about the big switch in unicodectype.c]
>
>[Tim: use the current CVS tree instead... code's been replaced...]

duh! gotta read them archives before following up on a request... 
can't trust the hyper-active Python development team with a code 
freeze.... <wink>

I'm happy to report that it now compiles correctly without a 
-traditional-cpp flag.

Unfortunately, test_re.py now seg faults... which is caused by 
test_sre.py... in particular the following:

src/Lib/test/test_sre.py

if verbose:
    print 'Test engine limitations'

# Try nasty case that overflows the straightforward recursive
# implementation of repeated groups.
#test(r"""sre.match(r'(x)*', 50000*'x').span()""",
#   (0, 50000), RuntimeError)
#test(r"""sre.match(r'(x)*y', 50000*'x'+'y').span()""",
#     (0, 50001), RuntimeError)
#test(r"""sre.match(r'(x)*?y', 50000*'x'+'y').span()""",
#     (0, 50001), RuntimeError)


test_unicodedata fails... same endian problem as SGI...
test_format fails... looks like a problem with the underlying C code.

Here's the config instructions for Mac OS X Public Beta:

Building Python 2.0b1 + CVS
9/26/2000
Dan Wolfe

./configure -with-threads -with-dyld -with-suffix=.exe

change in src/config.h:

/* Define if you have POSIX threads */
#define _POSIX_THREADS 1

to 

/* #define _POSIX_THREADS 1 */

change in src/Makefile

# Compiler options passed to subordinate makes
OPT=		-g -O2 -OPT:Olimit=0

to

OPT=		-g -O2

comment out the following in src/Lib/test/test_sre.py

if verbose:
    print 'Test engine limitations'

# Try nasty case that overflows the straightforward recursive
# implementation of repeated groups.
#test(r"""sre.match(r'(x)*', 50000*'x').span()""",
#   (0, 50000), RuntimeError)
#test(r"""sre.match(r'(x)*y', 50000*'x'+'y').span()""",
#     (0, 50001), RuntimeError)
#test(r"""sre.match(r'(x)*?y', 50000*'x'+'y').span()""",
#     (0, 50001), RuntimeError)


After install, manually go into /usr/local/bin and strip the .exe off the 
installed files.


- Dan






From trentm at ActiveState.com  Wed Sep 27 09:32:33 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Wed, 27 Sep 2000 00:32:33 -0700
Subject: [Python-Dev] WHOA!!! Screw up on my part: how do I undo this (Re: [Python-checkins] CVS: black - Imported sources)
In-Reply-To: <200009270706.AAA21107@slayer.i.sourceforge.net>; from tmick@users.sourceforge.net on Wed, Sep 27, 2000 at 12:06:06AM -0700
References: <200009270706.AAA21107@slayer.i.sourceforge.net>
Message-ID: <20000927003233.C19872@ActiveState.com>

I was playing with a different SourceForge project and I screwed up my
CVSROOT (used Python's instead). Sorry, sorry!

How do I undo this cleanly? I could 'cvs remove' the README.txt file but that
would still leave the top-level 'black/' turd right? Do the SourceForge admin
guys have to manually kill the 'black' directory in the repository?


or-failing-that-can-my-pet-project-make-it-into-python-2.0-<weak-smile>-ly
yours,
Trent



On Wed, Sep 27, 2000 at 12:06:06AM -0700, Trent Mick wrote:
> Update of /cvsroot/python/black
> In directory slayer.i.sourceforge.net:/tmp/cvs-serv20977
> 
> Log Message:
> first import into CVS
> 
> Status:
> 
> Vendor Tag:	vendor
> Release Tags:	start
> 		
> N black/README.txt
> 
> No conflicts created by this import
> 
> 
> ***** Bogus filespec: -
> ***** Bogus filespec: Imported
> ***** Bogus filespec: sources
> 
> _______________________________________________
> Python-checkins mailing list
> Python-checkins at python.org
> http://www.python.org/mailman/listinfo/python-checkins

-- 
Trent Mick
TrentM at ActiveState.com



From effbot at telia.com  Wed Sep 27 10:06:44 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Wed, 27 Sep 2000 10:06:44 +0200
Subject: [Python-Dev] FW: regarding the Python Developer posting...
References: <0G1J0028SA6KS4@mta5.snfc21.pbi.net>
Message-ID: <000c01c02859$e1502420$766940d5@hagrid>

dan wrote:
> >[Tim: use the current CVS tree instead... code's been replaced...]
> 
> duh! gotta read them archives before following up on a request... 
> can't trust the hyper-active Python development team with a code 
> freeze.... <wink>

heh.  your bug report was the main reason for getting this change
into 2.0b2, and we completely forgot to tell you about it...

> Unfortunately, test_re.py now seg faults... which is caused by 
> test_sre.py... in particular the following:
> 
> src/Lib/test/test_sre.py
> 
> if verbose:
>     print 'Test engine limitations'
> 
> # Try nasty case that overflows the straightforward recursive
> # implementation of repeated groups.
> #test(r"""sre.match(r'(x)*', 50000*'x').span()""",
> #   (0, 50000), RuntimeError)
> #test(r"""sre.match(r'(x)*y', 50000*'x'+'y').span()""",
> #     (0, 50001), RuntimeError)
> #test(r"""sre.match(r'(x)*?y', 50000*'x'+'y').span()""",
> #     (0, 50001), RuntimeError)

umm.  I assume it bombs if you uncomment those lines, right?

you could try adding a Mac OS clause to the recursion limit stuff
in Modules/_sre.c:

#if !defined(USE_STACKCHECK)
#if defined(...whatever's needed to detect Mac OS X...)
#define USE_RECURSION_LIMIT 5000
#elif defined(MS_WIN64) || defined(__LP64__) || defined(_LP64)
/* require smaller recursion limit for a number of 64-bit platforms:
   Win64 (MS_WIN64), Linux64 (__LP64__), Monterey (64-bit AIX) (_LP64) */
/* FIXME: maybe the limit should be 40000 / sizeof(void*) ? */
#define USE_RECURSION_LIMIT 7500
#else
#define USE_RECURSION_LIMIT 10000
#endif
#endif

replace "...whatever...", and try values larger than 5000 (or smaller,
if necessary; 10000 is clearly too large for your platform).

(alternatively, you can increase the stack size.  maybe it's very small
by default?)

</F>




From larsga at garshol.priv.no  Wed Sep 27 10:12:45 2000
From: larsga at garshol.priv.no (Lars Marius Garshol)
Date: 27 Sep 2000 10:12:45 +0200
Subject: [Python-Dev] Bogus SAX test case
In-Reply-To: <200009262034.WAA09761@loewis.home.cs.tu-berlin.de>
References: <200009262034.WAA09761@loewis.home.cs.tu-berlin.de>
Message-ID: <m3hf72uubm.fsf@lambda.garshol.priv.no>

* Martin v. Loewis
| 
| <?xml version="1.0" encoding="iso-8859-1"?>
| <ns:doc xmlns:ns1="http://www.python.org/xml-ns/saxtest/"></ns:doc>
| 
| (or, alternatively, the element could just be empty). Is that the
| XML that would produce above sequence of SAX events?

Nope, it's not.  No XML document could produce that particular
sequence of events.
 
| It seems to me that this XML is ill-formed, the namespace prefix ns
| is not defined here. Is that analysis correct? 

Not entirely.  The XML is perfectly well-formed, but it's not
namespace-compliant.

| Furthermore, the test checks whether the generator produces
| 
| <?xml version="1.0" encoding="iso-8859-1"?>
| <ns1:doc xmlns:ns1="http://www.python.org/xml-ns/saxtest/"></ns1:doc>
| 
| It appears that the expected output is bogus; I'd rather expect to get
| the original document back.

What original document? :-)
 
| My proposal would be to correct the test case to pass "ns1:doc" as
| the qname, 

I see that as being the best fix, and have now committed it.

| and to correct the generator to output the qname if that was
| provided by the reader.

We could do that, but the namespace name and the qname are supposed to
be equivalent in any case, so I don't see any reason to change it.
One problem with making that change is that it no longer becomes
possible to roundtrip XML -> pyexpat -> SAX -> xmlgen -> XML because
pyexpat does not provide qnames.
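
The behaviour Lars describes is easy to observe from the stdlib reader; a
small sketch (the document and handler here are made up for illustration):

```python
# Parse a namespaced element with feature_namespaces switched on and
# record what the reader hands to startElementNS.  With the expat
# driver the name arrives as a (namespace, localname) pair; the qname
# argument is whatever the reader can (or cannot) provide.
import io
import xml.sax
from xml.sax.handler import feature_namespaces

events = []

class Recorder(xml.sax.ContentHandler):
    def startElementNS(self, name, qname, attrs):
        events.append((name, qname))

parser = xml.sax.make_parser()
parser.setFeature(feature_namespaces, True)
parser.setContentHandler(Recorder())
parser.parse(io.BytesIO(
    b'<ns1:doc xmlns:ns1="http://www.python.org/xml-ns/saxtest/"></ns1:doc>'))
print(events)
```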

--Lars M.




From tim_one at email.msn.com  Wed Sep 27 10:45:57 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 27 Sep 2000 04:45:57 -0400
Subject: [Python-Dev] 2.0b2 is ... released?
Message-ID: <LNBBLJKPBEHFEDALKOLCIEFIHIAA.tim_one@email.msn.com>

The other guys are sleeping and I'm on vacation.  It *appears* that our West
Coast webmasters may have finished doing their thing, so pending Jeremy's
official announcement perhaps you'd just like to check it out:

    http://www.pythonlabs.com/products/python2.0/

I can't swear it's a release.  *Looks* like one, though!





From fredrik at pythonware.com  Wed Sep 27 11:00:34 2000
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Wed, 27 Sep 2000 11:00:34 +0200
Subject: [Python-Dev] 2.0b2 is ... released?
References: <LNBBLJKPBEHFEDALKOLCIEFIHIAA.tim_one@email.msn.com>
Message-ID: <016201c02861$66aee2d0$0900a8c0@SPIFF>


> The other guys are sleeping and I'm on vacation.  It *appears* that our
> West Coast webmasters may have finished doing their thing, so pending Jeremy's
> official announcement perhaps you'd just like to check it out:
>
>     http://www.pythonlabs.com/products/python2.0/
>
> I can't swear it's a release.  *Looks* like one, though!

the daily URL says so too:

    http://www.pythonware.com/daily/

(but even though we removed some 2.5 megs of unicode stuff,
the new tarball is nearly as large as the previous one.  less filling,
more taste?)

</F>




From fredrik at pythonware.com  Wed Sep 27 11:08:04 2000
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Wed, 27 Sep 2000 11:08:04 +0200
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
References: <LNBBLJKPBEHFEDALKOLCGEEGHIAA.tim_one@email.msn.com>
Message-ID: <018401c02862$72311820$0900a8c0@SPIFF>

tim wrote:
> > test test_unicodedata failed -- Writing:
> > 'e052289ecef97fc89c794cf663cb74a64631d34e', expected:
> > 'b88684df19fca8c3d0ab31f040dd8de89f7836fe'
>
> The problem appears to be that the test uses the secret "unicode-internal"
> encoding, which is dependent upon the big/little-endianness of your
> platform.

my fault -- when I saw that, I asked myself "why the heck doesn't mal
just use repr, like I did?" and decided that he used "unicode-internal"
to make sure the test didn't break if the repr encoding changed.

too bad my brain didn't trust my eyes...

> I can reproduce your flawed hash exactly on my platform by replacing this
> line:
>
>         h.update(u''.join(data).encode('unicode-internal'))

I suggest replacing "unicode-internal" with "utf-8" (which is as canonical
as anything can be...)
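
A sketch of why a byte-oriented encoding makes the checksum portable (the
string and digest here are illustrative, not the test's actual data):

```python
# utf-8 output is defined byte by byte, so the encoded bytes -- and any
# digest over them -- are identical on big- and little-endian machines.
import hashlib

data = "\u00e9\u4e00"                      # e-acute + a CJK ideograph
blob = data.encode("utf-8")
assert blob == b"\xc3\xa9\xe4\xb8\x80"     # same bytes on every platform
print(hashlib.sha1(blob).hexdigest())
```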

</F>




From tim_one at email.msn.com  Wed Sep 27 11:19:03 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 27 Sep 2000 05:19:03 -0400
Subject: [Python-Dev] 2.0b2 is ... released?
In-Reply-To: <016201c02861$66aee2d0$0900a8c0@SPIFF>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEFLHIAA.tim_one@email.msn.com>

>> The other guys are sleeping and I'm on vacation.  It *appears* that our
>> West Coast webmasters may have finished doing their thing, so
>> pending Jeremy's official announcement perhaps you'd just like to
>> check it out:
>>
>>     http://www.pythonlabs.com/products/python2.0/
>>
>> I can't swear it's a release.  *Looks* like one, though!

[/F]
> the daily URL says so too:
>
>     http://www.pythonware.com/daily/

Thanks, /F!  I'll *believe* it's a release if I can ever complete
downloading the Windows installer from that site.  S-l-o-w!

> (but even though we removed some 2.5 megs of unicode stuff,
> the new tarball is nearly as large as the previous one.  less filling,
> more taste?)

Heh, I expected *that* one:  the fact that the Unicode stuff was highly
compressible wasn't lost on gzip either.  The Windows installer shrunk less
than 10%, and that includes savings also due to (a) not shipping two full
copies of Lib/ anymore (looked like an ancient stray duplicate line in the
installer script), and (b) not shipping the debug .lib files anymore.
There's a much nicer savings after it's all unpacked, of course.

Hey!  Everyone check out the "what's new in 2.0b2" section!  This was an
incredible amount of good work in a 3-week period, and you should all be
proud of yourselves.  And *especially* proud if you actually helped <wink>.

if-you-just-got-in-the-way-we-love-you-too-ly y'rs  - tim





From mal at lemburg.com  Wed Sep 27 14:13:01 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Wed, 27 Sep 2000 14:13:01 +0200
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
References: <LNBBLJKPBEHFEDALKOLCGEEGHIAA.tim_one@email.msn.com> <018401c02862$72311820$0900a8c0@SPIFF>
Message-ID: <39D1E44D.C7E080D@lemburg.com>

Fredrik Lundh wrote:
> 
> tim wrote:
> > > test test_unicodedata failed -- Writing:
> > > 'e052289ecef97fc89c794cf663cb74a64631d34e', expected:
> > > 'b88684df19fca8c3d0ab31f040dd8de89f7836fe'
> >
> > The problem appears to be that the test uses the secret "unicode-internal"
> > encoding, which is dependent upon the big/little-endianness of your
> > platform.
> 
> my fault -- when I saw that, I asked myself "why the heck doesn't mal
> just use repr, like I did?" and decided that he used "unicode-escape"
> was make to sure the test didn't break if the repr encoding changed.
> 
> too bad my brain didn't trust my eyes...

repr() would have been a bad choice since the past has shown
that repr() does change. I completely forgot about the endianness
which affects the hash value.
 
> > I can reproduce your flawed hash exactly on my platform by replacing this
> > line:
> >
> >         h.update(u''.join(data).encode('unicode-internal'))
> 
> I suggest replacing "unicode-internal" with "utf-8" (which is as canonical
> as
> anything can be...)

I think UTF-8 will bring about problems with surrogates (that's
why I used the unicode-internal codec). I haven't checked this
though... I'll fix this ASAP.
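
MAL's surrogate worry, sketched (modern interpreters refuse outright;
behaviour in 2.0 may have differed):

```python
# A lone surrogate code point has no valid utf-8 encoding, so a
# utf-8-based checksum can fail on data the internal layout accepted.
try:
    "\ud800".encode("utf-8")
    raised = False
except UnicodeEncodeError:
    raised = True
print("lone surrogate rejected:", raised)
```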

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From thomas at xs4all.net  Wed Sep 27 14:19:42 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 27 Sep 2000 14:19:42 +0200
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14801.6680.507173.995404@cj42289-a.reston1.va.home.com>; from fdrake@beopen.com on Tue, Sep 26, 2000 at 05:50:16PM -0400
References: <14800.62087.617722.272109@bitdiddle.concentric.net> <14800.64622.961057.204969@cj42289-a.reston1.va.home.com> <14801.383.799094.8428@cj42289-a.reston1.va.home.com> <20000926141610.A6557@keymaster.enme.ucalgary.ca> <14801.6680.507173.995404@cj42289-a.reston1.va.home.com>
Message-ID: <20000927141942.M20757@xs4all.nl>

On Tue, Sep 26, 2000 at 05:50:16PM -0400, Fred L. Drake, Jr. wrote:

[ test_fcntl, test_pty and test_openpty failing on SuSe & Caldera Linux ]

>   Now, it may be that something strange is going on since these are
> the "virtual environments" on SourceForge.  I'm not sure these are
> really the same thing as running those systems.  I'm looking at the
> script to start SuSE; there's nothing really there but a chroot call;
> perhaps there's a kernel/library mismatch?

Nope, you almost got it. You were so close, too! It's not a kernel/library
thing, it's the chroot call ;) I'm *guessing* here, but it looks like you
get a faked privileged shell in a chrooted environment, which isn't actually
privileged (kind of like the FreeBSD 'jail' thing.) It doesn't surprise me
one bit that it fails on those three tests. In fact, I'm (delightedly)
surprised that it didn't fail more tests! But these three require some
close interaction between the kernel, the libc, and the filesystem (instead
of just kernel/fs, libc/fs or kernel/libc.)

It could be anything: security-checks on owner/mode in the kernel,
security-checks on same in libc, or perhaps something sees the chroot and
decides that deception is not going to work in this case. If Sourceforge is
serious about this virtual environment service they probably do want to know
about this, though. I'll see if I can get my SuSe-loving colleague to
compile&test Python on his box, and if that works alright, I think we can
safely claim this is a Sourceforge bug, not a Python one. I don't know
anyone using Caldera, though.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From mal at lemburg.com  Wed Sep 27 14:20:30 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Wed, 27 Sep 2000 14:20:30 +0200
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
References: <LNBBLJKPBEHFEDALKOLCGEEGHIAA.tim_one@email.msn.com> <018401c02862$72311820$0900a8c0@SPIFF> <39D1E44D.C7E080D@lemburg.com>
Message-ID: <39D1E60E.95E04302@lemburg.com>

"M.-A. Lemburg" wrote:
> 
> Fredrik Lundh wrote:
> >
> > tim wrote:
> > > > test test_unicodedata failed -- Writing:
> > > > 'e052289ecef97fc89c794cf663cb74a64631d34e', expected:
> > > > 'b88684df19fca8c3d0ab31f040dd8de89f7836fe'
> > >
> > > The problem appears to be that the test uses the secret "unicode-internal"
> > > encoding, which is dependent upon the big/little-endianess of your
> > platform.
> >
> > my fault -- when I saw that, I asked myself "why the heck doesn't mal
> > just use repr, like I did?" and decided that he used "unicode-escape"
> > to make sure the test didn't break if the repr encoding changed.
> >
> > too bad my brain didn't trust my eyes...
> 
> repr() would have been a bad choice since the past has shown
> that repr() does change. I completely forgot about the endianness
> which affects the hash value.
> 
> > > I can reproduce your flawed hash exactly on my platform by replacing this
> > > line:
> > >
> > >         h.update(u''.join(data).encode('unicode-internal'))
> >
> > I suggest replacing "unicode-internal" with "utf-8" (which is as canonical
> > as
> > anything can be...)
> 
> I think UTF-8 will bring about problems with surrogates (that's
> why I used the unicode-internal codec). I haven't checked this
> though... I'll fix this ASAP.

UTF-8 works for me. I'll check in a patch.
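
In modern Python terms, the fix is easy to demonstrate: hashing the UTF-8 encoding of a string gives the same digest on every platform, while an endian-dependent encoding such as explicit UTF-16 (standing in here for the old "unicode-internal" codec) produces different byte streams, and hence different hashes, depending on byte order. A minimal sketch (sample characters are arbitrary):

```python
import hashlib

data = u"\u2159\u00e9A"  # arbitrary sample characters, incl. a non-ASCII one

# UTF-8 is byte-order independent: this digest is identical everywhere.
utf8_digest = hashlib.sha1(data.encode("utf-8")).hexdigest()

# Explicit-endian UTF-16 encodings yield different byte streams, so a test
# hashing an endian-dependent encoding breaks across platforms.
le_digest = hashlib.sha1(data.encode("utf-16-le")).hexdigest()
be_digest = hashlib.sha1(data.encode("utf-16-be")).hexdigest()

print(le_digest != be_digest)  # True: byte order changes the hash
```

This is why switching the test from "unicode-internal" to UTF-8 makes the expected hash stable.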

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From fdrake at beopen.com  Wed Sep 27 15:22:56 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Wed, 27 Sep 2000 09:22:56 -0400 (EDT)
Subject: [Python-Dev] 2.0b2 is ... released?
In-Reply-To: <016201c02861$66aee2d0$0900a8c0@SPIFF>
References: <LNBBLJKPBEHFEDALKOLCIEFIHIAA.tim_one@email.msn.com>
	<016201c02861$66aee2d0$0900a8c0@SPIFF>
Message-ID: <14801.62640.276852.209527@cj42289-a.reston1.va.home.com>

Fredrik Lundh writes:
 > (but even though we removed some 2.5 megs of unicode stuff,
 > the new tarball is nearly as large as the previous one.  less filling,
 > more taste?)

  Umm... Zesty!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From jeremy at beopen.com  Wed Sep 27 18:04:36 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Wed, 27 Sep 2000 12:04:36 -0400 (EDT)
Subject: [Python-Dev] Python 2.0b2 is released!
Message-ID: <14802.6804.717866.176697@bitdiddle.concentric.net>

Python 2.0b2 is released.  The BeOpen PythonLabs and our cast of
SourceForge volunteers have fixed many bugs since the 2.0b1 release
three weeks ago.  Please go here to pick up the new release:

    http://www.pythonlabs.com/tech/python2.0/

There's a tarball, a Windows installer, RedHat RPMs, online
documentation, and a long list of fixed bugs.

The final release of Python 2.0 is expected in early- to mid-October.
We would appreciate feedback on the current beta release in order to
fix any remaining bugs before the final release.  Confirmation of
build and test success on less common platforms is also helpful.

Python 2.0 has many new features, including the following:

  - Augmented assignment, e.g. x += 1
  - List comprehensions, e.g. [x**2 for x in range(10)]
  - Extended import statement, e.g. import Module as Name
  - Extended print statement, e.g. print >> file, "Hello"
  - Optional collection of cyclical garbage

This release fixes many known bugs.  The list of open bugs has dropped
to 50, and more than 100 bug reports have been resolved since Python
1.6.  To report a new bug, use the SourceForge bug tracker
http://sourceforge.net/bugs/?func=addbug&group_id=5470

-- Jeremy Hylton <http://www.python.org/~jeremy/>




From jeremy at beopen.com  Wed Sep 27 18:31:35 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Wed, 27 Sep 2000 12:31:35 -0400 (EDT)
Subject: [Python-Dev] Re: Python 2.0b2 is released!
In-Reply-To: <14802.6804.717866.176697@bitdiddle.concentric.net>
References: <14802.6804.717866.176697@bitdiddle.concentric.net>
Message-ID: <14802.8423.701972.950382@bitdiddle.concentric.net>

The correct URL for the Python 2.0b2 release is:
    http://www.pythonlabs.com/products/python2.0/

-- Jeremy Hylton <http://www.python.org/~jeremy/>



From tommy at ilm.com  Wed Sep 27 19:26:53 2000
From: tommy at ilm.com (Victor the Cleaner)
Date: Wed, 27 Sep 2000 10:26:53 -0700 (PDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEEGHIAA.tim_one@email.msn.com>
References: <14801.10496.986326.537462@mace.lucasdigital.com>
	<LNBBLJKPBEHFEDALKOLCGEEGHIAA.tim_one@email.msn.com>
Message-ID: <14802.11605.281385.45283@mace.lucasdigital.com>

Tim Peters writes:
| [Victor the Cleaner]
| > Jeremy asked me to send this report (which I originally sent just to
| > him) along to the rest of python-dev, so here ya go:
| 
| Bug reports should go to SourceForge, else as often as not they'll get
| lost.

Sorry, this wasn't intended to be a bug report (not yet, at least).
Jeremy asked for feedback on the release, and that's all I was trying
to give. 


| When you do this from a shell:
| 
| >>> u"A".encode("unicode-internal")
| 'A\000'
| >>>
| 
| I bet you get
| 
| '\000A'
| 
| Right?

Right, as usual. :)  Sounds like MAL already has this one fixed,
too... 
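
The byte-order dependence Tim describes can be reproduced today with the explicit-endian UTF-16 codecs (a sketch in modern Python; the old "unicode-internal" codec simply exposed whichever order the host machine used):

```python
# 'A' is code point 0x0041; a 16-bit code unit is stored as two bytes
# whose order depends on endianness.
little = u"A".encode("utf-16-le")  # b'A\x00' -- the 'A\000' Tim saw on Intel
big = u"A".encode("utf-16-be")     # b'\x00A' -- the '\000A' a big-endian box gives

print(little, big)
```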



From martin at loewis.home.cs.tu-berlin.de  Wed Sep 27 20:36:04 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Wed, 27 Sep 2000 20:36:04 +0200
Subject: [XML-SIG] Re: [Python-Dev] Bogus SAX test case
In-Reply-To: <m3hf72uubm.fsf@lambda.garshol.priv.no> (message from Lars Marius
	Garshol on 27 Sep 2000 10:12:45 +0200)
References: <200009262034.WAA09761@loewis.home.cs.tu-berlin.de> <m3hf72uubm.fsf@lambda.garshol.priv.no>
Message-ID: <200009271836.UAA00872@loewis.home.cs.tu-berlin.de>

> | My proposal would be to correct the test case to pass "ns1:doc" as
> | the qname, 
> 
> I see that as being the best fix, and have now committed it.

Thanks!

> | and to correct the generator to output the qname if that was
> | provided by the reader.
> 
> We could do that, but the namespace name and the qname are supposed to
> be equivalent in any case, so I don't see any reason to change it.

What about

<foo xmlns:mine="martin:von.loewis">
  <bar xmlns:meiner="martin:von.loewis">
    <mine:foobar/>
    <meiner:foobar/>
  </bar>
</foo>

In that case, one of the qnames will change on output when your
algorithm is used - even if the parser provided the original names. By
the way, when parsing this text via

import xml.sax,xml.sax.handler,xml.sax.saxutils,StringIO
p=xml.sax.make_parser()
p.setContentHandler(xml.sax.saxutils.XMLGenerator())
p.setFeature(xml.sax.handler.feature_namespaces,1)
i=xml.sax.InputSource()
i.setByteStream(StringIO.StringIO("""<foo xmlns:mine="martin:von.loewis"><bar xmlns:meiner="martin:von.loewis"><mine:foobar/><meiner:foobar/></bar></foo>"""))
p.parse(i)
print

I get a number of interesting failures. Would you mind looking into
that?

On a related note, it seems that "<xml:hello/>" won't unparse
properly, either...

Regards,
Martin



From mal at lemburg.com  Wed Sep 27 20:53:24 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Wed, 27 Sep 2000 20:53:24 +0200
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
References: <14801.10496.986326.537462@mace.lucasdigital.com>
		<LNBBLJKPBEHFEDALKOLCGEEGHIAA.tim_one@email.msn.com> <14802.11605.281385.45283@mace.lucasdigital.com>
Message-ID: <39D24224.EAF1E144@lemburg.com>

Victor the Cleaner wrote:
> 
> Tim Peters writes:
> | [Victor the Cleaner]
> | > Jeremy asked me to send this report (which I originally sent just to
> | > him) along to the rest of python-dev, so here ya go:
> |
> | Bug reports should go to SourceForge, else as often as not they'll get
> | lost.
> 
> Sorry, this wasn't intended to be a bug report (not yet, at least).
> Jeremy asked for feedback on the release, and that's all I was trying
> to give.
> 
> | When you do this from a shell:
> |
> | >>> u"A".encode("unicode-internal")
> | 'A\000'
> | >>>
> |
> | I bet you get
> |
> | '\000A'
> |
> | Right?
> 
> Right, as usual. :)  Sounds like MAL already has this one fixed,
> too...

It is fixed in CVS ... don't know if the patch made it into
the release though. The new test now uses UTF-8 as encoding
which is endian-independent.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From tim_one at email.msn.com  Wed Sep 27 21:25:54 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 27 Sep 2000 15:25:54 -0400
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <39D24224.EAF1E144@lemburg.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEHIHIAA.tim_one@email.msn.com>

[Victor the Cleaner]
> Sorry, this wasn't intended to be a bug report (not yet, at least).
> Jeremy asked for feedback on the release, and that's all I was trying
> to give.

Tommy B, is that you, hiding behind a Victor mask?  Cool!  I was really
directing my rancor at Jeremy <wink>:  by the time he fwd'ed the msg here,
it was already too late to change the release, so it had already switched
from "feedback" to "bug".

[MAL]
> It is fixed in CVS ... don't know if the patch made it into
> the release though. The new test now uses UTF-8 as encoding
> which is endian-independent.

Alas, it was not in the release.  I didn't even know about it until after
the installers were all built and shipped.  Score another for last-second
improvements <0.5 wink>.

Very, very weird:  we all know that SHA is believed to be cryptographically
secure, so there was no feasible way to deduce why the hashes were
different.  But I was coming down with a fever at the time (now in full
bloom, alas), and just stared at the two hashes:

    good:  b88684df19fca8c3d0ab31f040dd8de89f7836fe
    bad:   e052289ecef97fc89c794cf663cb74a64631d34e

Do you see the pattern?  Ha!  I did!  They both end with "e", and in my
fuzzy-headed state I immediately latched on to that and thought "hmm ... 'e'
is for 'endian'".  Else I wouldn't have had a clue!

should-get-sick-more-often-i-guess-ly y'rs  - tim





From mal at lemburg.com  Wed Sep 27 21:38:13 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Wed, 27 Sep 2000 21:38:13 +0200
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
References: <LNBBLJKPBEHFEDALKOLCEEHIHIAA.tim_one@email.msn.com>
Message-ID: <39D24CA5.7F914B7E@lemburg.com>

[Tim Peters wrote about the test_unicodedata.py glitch]:
> 
> [MAL]
> > It is fixed in CVS ... don't know if the patch made it into
> > the release though. The new test now uses UTF-8 as encoding
> > which is endian-independent.
> 
> Alas, it was not in the release.  I didn't even know about it until after
> the installers were all built and shipped.  Score another for last-second
> improvements <0.5 wink>.

You're right. This shouldn't have been applied so close to the
release date/time. Looks like all reviewers work on little
endian machines...
 
> Very, very weird:  we all know that SHA is believed to be cryptographically
> secure, so there was no feasible way to deduce why the hashes were
> different. But I was coming down with a fever at the time (now in full
> bloom, alas), and just stared at the two hashes:
> 
>     good:  b88684df19fca8c3d0ab31f040dd8de89f7836fe
>     bad:   e052289ecef97fc89c794cf663cb74a64631d34e
> 
> Do you see the pattern?  Ha!  I did!  They both end with "e", and in my
> fuzzy-headed state I immediately latched on to that and thought "hmm ... 'e'
> is for 'endian'".  Else I wouldn't have had a clue!

Well, let's think of it as a hidden feature: the test fails
if and only if it is run on a big endian machine... should
have named the test to something more obvious, e.g.
test_littleendian.py ;-)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From jeremy at beopen.com  Wed Sep 27 21:59:52 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Wed, 27 Sep 2000 15:59:52 -0400 (EDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <39D24CA5.7F914B7E@lemburg.com>
References: <LNBBLJKPBEHFEDALKOLCEEHIHIAA.tim_one@email.msn.com>
	<39D24CA5.7F914B7E@lemburg.com>
Message-ID: <14802.20920.420649.929910@bitdiddle.concentric.net>

>>>>> "MAL" == M -A Lemburg <mal at lemburg.com> writes:

  MAL> [Tim Peters wrote about the test_unicodedata.py glitch]:
  >>
  >> [MAL]
  >> > It is fixed in CVS ... don't know if the patch made it into the
  >> > release though. The new test now uses UTF-8 as encoding which
  >> > is endian-independent.
  >>
  >> Alas, it was not in the release.  I didn't even know about it
  >> until after the installers were all built and shipped.  Score
  >> another for last-second improvements <0.5 wink>.

  MAL> You're right. This shouldn't have been applied so close to the
  MAL> release date/time. Looks like all reviewers work on little
  MAL> endian machines...
 
Yes.  I was a bit reckless; the test_unicodedata and latest distutils
checkins had been made following the official code freeze and were
not added to fix a showstopper bug.  I should have deferred
them.

We'll have to be a lot more careful about the 2.0 final release.  PEP
200 has a tentative ship date of Oct. 10.  We should probably have a
code freeze on Oct. 6 and leave the weekend and Monday for verifying
that there are no build problems on little- and big-endian platforms.

Jeremy



From skip at mojam.com  Wed Sep 27 22:15:23 2000
From: skip at mojam.com (Skip Montanaro)
Date: Wed, 27 Sep 2000 15:15:23 -0500 (CDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14802.20920.420649.929910@bitdiddle.concentric.net>
References: <LNBBLJKPBEHFEDALKOLCEEHIHIAA.tim_one@email.msn.com>
	<39D24CA5.7F914B7E@lemburg.com>
	<14802.20920.420649.929910@bitdiddle.concentric.net>
Message-ID: <14802.21851.446506.215291@beluga.mojam.com>

    Jeremy> We'll have to be a lot more careful about the 2.0 final release.
    Jeremy> PEP 200 has a tentative ship date of Oct. 10.  We should probably
    Jeremy> have a code freeze on Oct. 6 and leave the weekend and Monday
    Jeremy> for verifying that there are no build problems on little- and
    Jeremy> big-endian platforms.

Since you can't test on all platforms, if you fix platform-specific bugs
between now and final release, I suggest you make bundles (tar, Windows
installer, whatever) available (without need for CVS) and specifically ask
the people who reported those bugs to check things out using the appropriate
bundle(s).  This is as opposed to making such stuff available and then
posting a general note to the various mailing lists asking people to try
things out.  I think if you're more direct with people who have
"interesting" platforms, you will improve the chances of wringing out a few
more bugs before the actual release.

Skip




From jeremy at beopen.com  Wed Sep 27 23:10:21 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Wed, 27 Sep 2000 17:10:21 -0400 (EDT)
Subject: [Python-Dev] buffer overlow in PC/getpathp.c
Message-ID: <14802.25149.170239.848119@bitdiddle.concentric.net>

Mark,

Would you have some time to review PC/getpathp.c for buffer overflow
vulnerabilities?  I just fixed several problems in Modules/getpath.c
that were caused by assuming that certain environment variables and
argv[0] would contain strings less than MAXPATHLEN bytes long.  I
assume the Windows version of the code could have the same
vulnerabilities.  

Jeremy

PS Is there some other Windows expert who could check into this?



From effbot at telia.com  Wed Sep 27 23:41:45 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Wed, 27 Sep 2000 23:41:45 +0200
Subject: [Python-Dev] stupid floating point question...
Message-ID: <001e01c028cb$bd20f620$766940d5@hagrid>

each unicode character has an optional "numeric value",
which may be a fractional value.

the unicodedata module provides a "numeric" function,
which returns a Python float representing this fraction.
this is currently implemented by a large switch statement,
containing entries like:

    case 0x2159:
        return (double) 1 / 6;

if I replace the numbers here with integer variables (read
from the character type table) and return the result to
Python, will str(result) be the same thing as before for all
reasonable values?

</F>




From tim_one at email.msn.com  Wed Sep 27 23:39:21 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 27 Sep 2000 17:39:21 -0400
Subject: [Python-Dev] stupid floating point question...
In-Reply-To: <001e01c028cb$bd20f620$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEIEHIAA.tim_one@email.msn.com>

Try again?  I have no idea what you're asking.  Obviously, str(i) won't look
anything like str(1./6) for any integer i, so *that's* not what you're
asking.

> -----Original Message-----
> From: python-dev-admin at python.org [mailto:python-dev-admin at python.org]On
> Behalf Of Fredrik Lundh
> Sent: Wednesday, September 27, 2000 5:42 PM
> To: python-dev at python.org
> Subject: [Python-Dev] stupid floating point question...
>
>
> each unicode character has an optional "numeric value",
> which may be a fractional value.
>
> the unicodedata module provides a "numeric" function,
> which returns a Python float representing this fraction.
> this is currently implemented by a large switch statement,
> containing entries like:
>
>     case 0x2159:
>         return (double) 1 / 6;
>
> if I replace the numbers here with integer variables (read
> from the character type table) and return the result to
> Python, will str(result) be the same thing as before for all
> reasonable values?
>
> </F>





From effbot at telia.com  Wed Sep 27 23:59:48 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Wed, 27 Sep 2000 23:59:48 +0200
Subject: [Python-Dev] stupid floating point question...
References: <LNBBLJKPBEHFEDALKOLCIEIEHIAA.tim_one@email.msn.com>
Message-ID: <005b01c028ce$4234bb60$766940d5@hagrid>

> Try again?  I have no idea what you're asking.  Obviously, str(i) won't
> look anything like str(1./6) for any integer i, so *that's* not what you're
> asking.

consider this code:

        PyObject* myfunc1(void) {
            return PyFloat_FromDouble((double) A / B);
        }

(where A and B are constants (#defines, or spelled out))

and this code:

        PyObject* myfunc2(int a, int b) {
            return PyFloat_FromDouble((double) a / b);
        }

if I call the latter with a=A and b=B, and pass the resulting
Python float through "str", will I get the same result on all
ANSI-compatible platforms?

(in the first case, the compiler will most likely do the casting
and the division for me, while in the latter case, it's done at
runtime)

</F>
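
A rough way to check the underlying assumption from Python itself (a sketch, not a proof about any particular C compiler): IEEE-754 double division is exactly determined by its operands, so a quotient computed from variables is bit-identical to one computed from literals, and str() of bit-identical floats must match.

```python
import struct

def as_bits(x):
    """Return the raw IEEE-754 bit pattern of a Python float."""
    return struct.pack("<d", x)

a, b = 1.0, 6.0
runtime = a / b            # operands arrive in variables ("runtime" division)
compiletime = 1.0 / 6.0    # operands spelled out as literals

# Same operands -> same correctly-rounded IEEE-754 quotient -> same bits,
# so str()/repr() agree as well.
print(as_bits(runtime) == as_bits(compiletime))  # True
print(str(runtime) == str(compiletime))          # True
```

The C case Fredrik asks about is the same question one level down: it holds as long as the compiler's compile-time folding performs the identical IEEE-754 operation the runtime would.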




From tommy at ilm.com  Wed Sep 27 23:48:50 2000
From: tommy at ilm.com (Victor the Cleaner)
Date: Wed, 27 Sep 2000 14:48:50 -0700 (PDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14802.21851.446506.215291@beluga.mojam.com>
References: <LNBBLJKPBEHFEDALKOLCEEHIHIAA.tim_one@email.msn.com>
	<39D24CA5.7F914B7E@lemburg.com>
	<14802.20920.420649.929910@bitdiddle.concentric.net>
	<14802.21851.446506.215291@beluga.mojam.com>
Message-ID: <14802.27432.535375.758974@mace.lucasdigital.com>

I'll be happy to test IRIX again when the time comes...

Skip Montanaro writes:
| 
|     Jeremy> We'll have to be a lot more careful about the 2.0 final release.
|     Jeremy> PEP 200 has a tentative ship date of Oct. 10.  We should probably
|     Jeremy> have a code freeze on Oct. 6 and leave the weekend and Monday
|     Jeremy> for verifying that there are no build problems on little- and
|     Jeremy> big-endian platforms.
| 
| Since you can't test on all platforms, if you fix platform-specific bugs
| between now and final release, I suggest you make bundles (tar, Windows
| installer, whatever) available (without need for CVS) and specifically ask
| the people who reported those bugs to check things out using the appropriate
| bundle(s).  This is as opposed to making such stuff available and then
| posting a general note to the various mailing lists asking people to try
| things out.  I think if you're more direct with people who have
| "interesting" platforms, you will improve the chances of wringing out a few
| more bugs before the actual release.
| 
| Skip
| 
| 
| _______________________________________________
| Python-Dev mailing list
| Python-Dev at python.org
| http://www.python.org/mailman/listinfo/python-dev



From tommy at ilm.com  Wed Sep 27 23:51:23 2000
From: tommy at ilm.com (Victor the Cleaner)
Date: Wed, 27 Sep 2000 14:51:23 -0700 (PDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <LNBBLJKPBEHFEDALKOLCEEHIHIAA.tim_one@email.msn.com>
References: <39D24224.EAF1E144@lemburg.com>
	<LNBBLJKPBEHFEDALKOLCEEHIHIAA.tim_one@email.msn.com>
Message-ID: <14802.27466.120918.480152@mace.lucasdigital.com>

Tim Peters writes:
| [Victor the Cleaner]
| > Sorry, this wasn't intended to be a bug report (not yet, at least).
| > Jeremy asked for feedback on the release, and that's all I was trying
| > to give.
| 
| Tommy B, is that you, hiding behind a Victor mask?  Cool!  I was really
| directing my rancor at Jeremy <wink>:  by the time he fwd'ed the msg here,
| it was already too late to change the release, so it had already switched
| from "feedback" to "bug".

Yup, it's me.  I've been leery of posting from my work address for a
long time, but Ping seemed to be getting away with it so I figured
"what the hell" ;)

| 
| Do you see the pattern?  Ha!  I did!  They both end with "e", and in my
| fuzzy-headed state I immediately latched on to that and thought "hmm ... 'e'
| is for 'endian'".  Else I wouldn't have had a clue!

I thought maybe 'e' was for 'eeeeeew' when you realized this was IRIX ;)

| 
| should-get-sick-more-often-i-guess-ly y'rs  - tim

Or just stay sick.  That's what I do...



From tim_one at email.msn.com  Thu Sep 28 00:08:50 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 27 Sep 2000 18:08:50 -0400
Subject: [Python-Dev] stupid floating point question...
In-Reply-To: <005b01c028ce$4234bb60$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEIIHIAA.tim_one@email.msn.com>

Ah!  I wouldn't worry about this -- go right ahead.  Not only the str()'s,
but even the repr()'s, are very likely to be identical.

A *good* compiler won't collapse *any* fp expressions at compile-time,
because doing so can change the 754 semantics at runtime (for example, the
evaluation of 1./6 triggers the 754 "inexact" signal, and the compiler has
no way to know whether the user is expecting that to happen at runtime, so a
good compiler will leave it alone ... at KSR, I munged our C compiler to
*try* collapsing at compile-time, capturing the 754 state before and
comparing it to the 754 state after, doing that again for each possible
rounding mode, and leaving the runtime code in if and only if any evaluation
changed any state; but, that was a *damned* good compiler <wink>).

> -----Original Message-----
> From: Fredrik Lundh [mailto:effbot at telia.com]
> Sent: Wednesday, September 27, 2000 6:00 PM
> To: Tim Peters; python-dev at python.org
> Subject: Re: [Python-Dev] stupid floating point question...
>
>
> > Try again?  I have no idea what you're asking.  Obviously, str(i) won't
> > look anything like str(1./6) for any integer i, so *that's* not
> > what you're asking.
>
> consider this code:
>
>         PyObject* myfunc1(void) {
>             return PyFloat_FromDouble((double) A / B);
>         }
>
> (where A and B are constants (#defines, or spelled out))
>
> and this code:
>
>         PyObject* myfunc2(int a, int b) {
>             return PyFloat_FromDouble((double) a / b);
>         }
>
> if I call the latter with a=A and b=B, and pass the resulting
> Python float through "str", will I get the same result on all
> ANSI-compatible platforms?
>
> (in the first case, the compiler will most likely do the casting
> and the division for me, while in the latter case, it's done at
> runtime)





From mal at lemburg.com  Thu Sep 28 00:08:42 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 28 Sep 2000 00:08:42 +0200
Subject: [Python-Dev] stupid floating point question...
References: <LNBBLJKPBEHFEDALKOLCIEIEHIAA.tim_one@email.msn.com> <005b01c028ce$4234bb60$766940d5@hagrid>
Message-ID: <39D26FEA.E17675AA@lemburg.com>

Fredrik Lundh wrote:
> 
> > Try again?  I have no idea what you're asking.  Obviously, str(i) won't
> > look anything like str(1./6) for any integer i, so *that's* not what you're
> > asking.
> 
> consider this code:
> 
>         PyObject* myfunc1(void) {
>             return PyFloat_FromDouble((double) A / B);
>         }
> 
> (where A and B are constants (#defines, or spelled out))
> 
> and this code:
> 
>         PyObject* myfunc2(int a, int b) {
>             return PyFloat_FromDouble((double) a / b);
>         }
> 
> if I call the latter with a=A and b=B, and pass the resulting
> Python float through "str", will I get the same result on all
> ANSI-compatible platforms?
> 
> (in the first case, the compiler will most likely do the casting
> and the division for me, while in the latter case, it's done at
> runtime)

Casts have a higher precedence than e.g. /, so (double)a/b will
be compiled as ((double)a)/b.

If you'd rather play safe, just add the extra parenthesis.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From m.favas at per.dem.csiro.au  Thu Sep 28 00:08:01 2000
From: m.favas at per.dem.csiro.au (Mark Favas)
Date: Thu, 28 Sep 2000 06:08:01 +0800
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
Message-ID: <39D26FC1.B8214C80@per.dem.csiro.au>

Jeremy writes...
We'll have to be a lot more careful about the 2.0 final release.  PEP
200 has a tentative ship date of Oct. 10.  We should probably have a
code freeze on Oct. 6 and leave the weekend and Monday for verifying
that there are no build problems on little- and big-endian platforms.

... and 64-bit platforms (or those where sizeof(long) != sizeof(int) !=
4) <grin> - a change yesterday to md5.h caused a compilation failure.
Logged as 
http://sourceforge.net/bugs/?func=detailbug&bug_id=115506&group_id=5470

-- 
Mark Favas  -   m.favas at per.dem.csiro.au
CSIRO, Private Bag No 5, Wembley, Western Australia 6913, AUSTRALIA



From tim_one at email.msn.com  Thu Sep 28 00:40:10 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 27 Sep 2000 18:40:10 -0400
Subject: [Python-Dev] Python 2.0b2 note for Windows developers
Message-ID: <LNBBLJKPBEHFEDALKOLCCEILHIAA.tim_one@email.msn.com>

Since most Python users on Windows don't have any use for them, I trimmed
the Python 2.0b2 installer by leaving out the debug-build .lib, .pyd, .exe
and .dll files.  If you want them, they're available in a separate zip
archive; read the Windows Users notes at

http://www.pythonlabs.com/products/python2.0/download_python2.0b2.html

for info and a download link.  If you don't already know *why* you might
want them, trust me:  you don't want them <wink>.

they-don't-even-make-good-paperweights-ly y'rs  - tim





From jeremy at beopen.com  Thu Sep 28 04:55:57 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Wed, 27 Sep 2000 22:55:57 -0400
Subject: [Python-Dev] RE: buffer overlow in PC/getpathp.c
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBAEFFDLAA.MarkH@ActiveState.com>
Message-ID: <AJEAKILOCCJMDILAPGJNOEOICBAA.jeremy@beopen.com>

>I would be happy to!  Although I am happy to report that I believe it
>safe - I have been very careful of this from the time I wrote it.
>
>What is the process?  How formal should it be?

Not sure how formal it should be, but I would recommend you review uses of
strcpy and convince yourself that the source string is never longer than the
target buffer.  I am not convinced.  For example, in calculate_path(), char
*pythonhome is initialized from an environment variable and thus has unknown
length.  Later it is used in a strcpy(prefix, pythonhome), where prefix has a
fixed length.  This looks like a vulnerability that could be closed by using
strncpy(prefix, pythonhome, MAXPATHLEN).

The Unix version of this code had three or four vulnerabilities of this
sort.  So I imagine the Windows version has those too.  I was imagining that
the registry offered a whole new opportunity to provide unexpectedly long
strings that could overflow buffers.

Jeremy





From MarkH at ActiveState.com  Thu Sep 28 04:53:08 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Thu, 28 Sep 2000 13:53:08 +1100
Subject: [Python-Dev] RE: buffer overlow in PC/getpathp.c
In-Reply-To: <AJEAKILOCCJMDILAPGJNOEOICBAA.jeremy@beopen.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBEEGADLAA.MarkH@ActiveState.com>

> target buffer.  I am not convinced.  For example, in
> calculate_path(), char
> *pythonhome is initialized from an environment variable and thus

Oh - ok - sorry.  I was speaking from memory.  From memory, I believe you
will find the registry functions safe - but likely not the older
environment based stuff, I agree.

I will be happy to look into this.

Mark.




From fdrake at beopen.com  Thu Sep 28 04:57:46 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Wed, 27 Sep 2000 22:57:46 -0400 (EDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <39D26FC1.B8214C80@per.dem.csiro.au>
References: <39D26FC1.B8214C80@per.dem.csiro.au>
Message-ID: <14802.45994.485874.454963@cj42289-a.reston1.va.home.com>

Jeremy Hylton writes:
 > We'll have to be a lot more careful about the 2.0 final release.  PEP
 > 200 has a tentative ship date of Oct. 10.  We should probably have a
 > code freeze on Oct. 6 and leave the weekend and Monday for verifying
 > that there are no build problems on little- and big-endian platforms.

  And hopefully we'll have a SPARC machine available before then, but
the timeframe is uncertain.

Mark Favas writes:
 > ... and 64-bit platforms (or those where sizeof(long) != sizeof(int) !=
 > 4) <grin> - a change yesterday to md5.h caused a compilation failure.

  I just checked in a patch based on Tim's comment on this; please
test this on your machine if you can.  Thanks!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From dkwolfe at pacbell.net  Thu Sep 28 17:08:52 2000
From: dkwolfe at pacbell.net (Dan Wolfe)
Date: Thu, 28 Sep 2000 08:08:52 -0700
Subject: [Python-Dev] FW: regarding the Python Developer posting...
Message-ID: <0G1L00JDRRD23W@mta6.snfc21.pbi.net>

>> [Seg faults in test_sre.py while testing limits]
>> 
>you could try adding a Mac OS clause to the recursion limit stuff
>in Modules/_sre.c:
>
>#if !defined(USE_STACKCHECK)
>#if defined(...whatever's needed to detect Mac OS X...)
>#define USE_RECURSION_LIMIT 5000
>#elif defined(MS_WIN64) || defined(__LP64__) || defined(_LP64)
>/* require smaller recursion limit for a number of 64-bit platforms:
>   Win64 (MS_WIN64), Linux64 (__LP64__), Monterey (64-bit AIX) (_LP64) */
>/* FIXME: maybe the limit should be 40000 / sizeof(void*) ? */
>#define USE_RECURSION_LIMIT 7500
>#else
>#define USE_RECURSION_LIMIT 10000
>#endif
>#endif
>
>replace "...whatever...", and try larger values than 5000 (or smaller,
>if necessary; 10000 is clearly too large for your platform).
>
>(alternatively, you can increase the stack size.  maybe it's very small
>by default?)

Hi /F,

I spotted the USE_STACKCHECK, got curious, and went hunting for it... of 
course curiosity killed the cat... it's time to go to work now.... 
meaning that the large number of replies, counter-replies, code and 
follow-ups that I'm going to need to wade through is going to have to wait.

Why, you ask?  Well, when you strip Mac OS X down to the core... it's unix 
based and therefore has the getrusage call... which means that I need 
to take a look at some of the patches - 
<http://sourceforge.net/patch/download.php?id=101352>

In the Public Beta the stack size is currently set to 512K by default... 
which is usually enough for most processes... but not sre...

I-should-have-stayed-up-all-night'ly yours,

- Dan



From loewis at informatik.hu-berlin.de  Thu Sep 28 17:37:10 2000
From: loewis at informatik.hu-berlin.de (Martin von Loewis)
Date: Thu, 28 Sep 2000 17:37:10 +0200 (MET DST)
Subject: [Python-Dev] stupid floating point question...
Message-ID: <200009281537.RAA21436@pandora.informatik.hu-berlin.de>

> A *good* compiler won't collapse *any* fp expressions at
> compile-time, because doing so can change the 754 semantics at
> runtime (for example, the evaluation of 1./6 triggers the 754
> "inexact" signal, and the compiler has no way to know whether the
> user is expecting that to happen at runtime, so a good compiler will
> leave it alone

Of course, that doesn't say anything about what *most* compilers do.
For example, gcc, on i586-pc-linux-gnu, compiles

double foo(){
	return (double)1/6;
}

into

.LC0:
	.long 0x55555555,0x3fc55555
.text
	.align 4
.globl foo
	.type	 foo,@function
foo:
	fldl .LC0
	ret

when compiling with -fomit-frame-pointer -O2. That still doesn't say
anything about what most compilers do - if there is interest, we could
perform a comparative study on the subject :-)

The "would break 754" argument is pretty weak, IMO - gcc, for example,
doesn't claim to comply with that standard.

Regards,
Martin




From jeremy at beopen.com  Thu Sep 28 18:58:48 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 28 Sep 2000 12:58:48 -0400 (EDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14802.21851.446506.215291@beluga.mojam.com>
References: <LNBBLJKPBEHFEDALKOLCEEHIHIAA.tim_one@email.msn.com>
	<39D24CA5.7F914B7E@lemburg.com>
	<14802.20920.420649.929910@bitdiddle.concentric.net>
	<14802.21851.446506.215291@beluga.mojam.com>
Message-ID: <14803.30920.93791.816163@bitdiddle.concentric.net>

>>>>> "SM" == Skip Montanaro <skip at mojam.com> writes:

  Jeremy> We'll have to be a lot more careful about the 2.0 final
  Jeremy> release.  PEP 200 has a tentative ship date of Oct. 10.  We
  Jeremy> should probably have a code freeze on Oct. 6 and leave the
  Jeremy> weekend and Monday for verifying that there are no build
  Jeremy> problems on little- and big-endian platforms.

  SM> Since you can't test on all platforms, if you fix
  SM> platform-specific bugs bettween now and final release, I suggest
  SM> you make bundles (tar, Windows installer, whatever) available
  SM> (without need for CVS) and specifically ask the people who
  SM> reported those bugs to check things out using the appropriate
  SM> bundle(s).

Good idea!  I've set up a cron job that will build a tarball every
night at 3am and place it on the ftp server at python.beopen.com:
    ftp://python.beopen.com/pub/python/snapshots/

I've started things off with a tar ball I built just now.
    Python-2.0b2-devel-2000-09-28.tar.gz

Tommy -- Could you use this snapshot to verify that the unicode test
is fixed?

Jeremy




From thomas.heller at ion-tof.com  Thu Sep 28 19:05:02 2000
From: thomas.heller at ion-tof.com (Thomas Heller)
Date: Thu, 28 Sep 2000 19:05:02 +0200
Subject: [Python-Dev] Re: [Distutils] Distutils 1.0 code freeze: Oct 1
References: <20000926205312.A1470@beelzebub>
Message-ID: <02af01c0296e$40cf1b30$4500a8c0@thomasnb>

> If you know about any outstanding Distutils bugs, please tell me *now*.
> Put 'em in the SourceForge bug database if you're wondering why I
> haven't fixed them yet -- they might have gotten lost, I might not know
> about 'em, etc.  If you're not sure, put it in SourceForge.

Mike Fletcher found another bug: extensions built on Windows
(at least with MSVC) in debug mode link against the wrong Python
import library.  This leads to crashes because the extension
loads the wrong Python DLL at runtime.

Will report this on sourceforge, although I doubt Greg will be able
to fix this...

Distutils code freeze: Greg, I have some time next week to work on
this. Do you give me permission to check it in if I find a solution?

Thomas




From martin at loewis.home.cs.tu-berlin.de  Thu Sep 28 21:32:00 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Thu, 28 Sep 2000 21:32:00 +0200
Subject: [Python-Dev] Dynamically loaded extension modules on MacOS X
Message-ID: <200009281932.VAA01999@loewis.home.cs.tu-berlin.de>

Has anybody succeeded in building extension modules for 2.0b1 on MacOS
X? On xml-sig, we had a report that the pyexpat module would not build
dynamically when building was initiated by the distutils, see the
report in

http://sourceforge.net/bugs/?func=detailbug&bug_id=115544&group_id=6473

Essentially, Python was configured with "-with-threads -with-dyld
-with-suffix=.exe", which causes extension modules to be linked as

cc -bundle -prebind {object files} -o {target}.so

With this linker line, the linker reported

/usr/bin/ld: warning -prebind has no effect with -bundle

and then

/usr/bin/ld: Undefined symbols:
_PyArg_ParseTuple
_PyArg_ParseTupleAndKeywords
...*removed a few dozen more symbols*...

So apparently the command line options are bogus for the compiler,
which identifies itself as

    Reading specs from /usr/libexec/ppc/2.95.2/specs
    Apple Computer, Inc. version cc-796.3, based on gcc driver version
     2.7.2.1 executing gcc version 2.95.2

Also, these options apparently won't cause creation of a shared
library. I wonder whether a simple "cc -shared" won't do the trick -
can a Mac expert enlighten me?

Regards,
Martin



From tommy at ilm.com  Thu Sep 28 21:38:54 2000
From: tommy at ilm.com (Victor the Cleaner)
Date: Thu, 28 Sep 2000 12:38:54 -0700 (PDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14803.30920.93791.816163@bitdiddle.concentric.net>
References: <LNBBLJKPBEHFEDALKOLCEEHIHIAA.tim_one@email.msn.com>
	<39D24CA5.7F914B7E@lemburg.com>
	<14802.20920.420649.929910@bitdiddle.concentric.net>
	<14802.21851.446506.215291@beluga.mojam.com>
	<14803.30920.93791.816163@bitdiddle.concentric.net>
Message-ID: <14803.40496.957808.858138@mace.lucasdigital.com>

Jeremy Hylton writes:
| 
| I've started things off with a tar ball I built just now.
|     Python-2.0b2-devel-2000-09-28.tar.gz
| 
| Tommy -- Could you use this snapshot to verify that the unicode test
| is fixed?


Sure thing.  I just tested it and it passed test_unicodedata.  Looks
good on this end...



From tim_one at email.msn.com  Thu Sep 28 21:59:55 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 28 Sep 2000 15:59:55 -0400
Subject: [Python-Dev] RE: stupid floating point question...
In-Reply-To: <200009281537.RAA21436@pandora.informatik.hu-berlin.de>
Message-ID: <LNBBLJKPBEHFEDALKOLCGELFHIAA.tim_one@email.msn.com>

[Tim]
> A *good* compiler won't collapse *any* fp expressions at
> compile-time ...

[Martin von Loewis]
> Of course, that doesn't say anything about what *most* compilers do.

Doesn't matter in this case; I told /F not to worry about it having taken
that all into account.  Almost all C compilers do a piss-poor job of taking
floating-point seriously, but it doesn't really matter for the purpose /F
has in mind.

[an example of gcc precomputing the best possible result]
> 	return (double)1/6;
> ...
> 	.long 0x55555555,0x3fc55555

No problem.  If you set the HW rounding mode to +infinity during
compilation, the first chunk there would end with a 6 instead.  Would affect
the tail end of the repr(), but not the str().
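For the curious, here's a quick sketch of why the difference only shows up
in the tail of repr():  1./6 and its neighbouring double agree to well past
12 significant digits.  (math.nextafter is a much later Python 3.9+
addition, so this is an illustration of the idea, not code from this era.)

```python
# Sketch: a one-ulp change in 1./6 (e.g. from a different rounding
# mode at compile time) is visible in the full repr(), but invisible
# to a 12-significant-digit str()-style rendering.
# (math.nextafter requires Python 3.9+ -- illustration only.)
import math

x = 1.0 / 6.0
neighbour = math.nextafter(x, 1.0)   # x plus one unit in the last place

assert x != neighbour                       # genuinely distinct doubles
assert repr(x) != repr(neighbour)           # repr() round-trips, so it differs
assert "%.12g" % x == "%.12g" % neighbour   # 12 digits can't see the change
```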

> ...
> when compiling with -fomit-frame-pointer -O2. That still doesn't say
> anything about what most compilers do - if there is interest, we could
> perform a comparative study on the subject :-)

No need.

> The "would break 754" argument is pretty weak, IMO - gcc, for example,
> doesn't claim to comply to that standard.

/F's question was about fp.  754 is the only hope he has for any x-platform
consistency (C89 alone gives no hope at all, and no basis for answering his
question).  To the extent that a C compiler ignores 754, it makes x-platform
fp consistency impossible (which, btw, Python inherits from C:  we can't
even manage to get string<->float working consistently across 100%
754-conforming platforms!).  Whether that's a weak argument or not depends
entirely on how important x-platform consistency is to a given app.  In /F's
specific case, a sloppy compiler is "good enough".

i'm-the-only-compiler-writer-i-ever-met-who-understood-fp<0.5-wink>-ly
    y'rs  - tim





From effbot at telia.com  Thu Sep 28 22:40:34 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Thu, 28 Sep 2000 22:40:34 +0200
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point question...)
References: <LNBBLJKPBEHFEDALKOLCGELFHIAA.tim_one@email.msn.com>
Message-ID: <004f01c0298c$62ba2320$766940d5@hagrid>

tim wrote:
> > Of course, that doesn't say anything about what *most* compilers do.
> 
> Doesn't matter in this case; I told /F not to worry about it having taken
> that all into account.  Almost all C compilers do a piss-poor job of taking
> floating-point seriously, but it doesn't really matter for the purpose /F
> has in mind.

to make it clear for everyone: I'm planning to get rid of the last
remaining switch statement in unicodectype.c ("numerical value"),
and replace the doubles in there with rationals.

the problem here is that MAL's new test suite uses "str" on the
return value from that function, and it would be a bit annoying if we
ended up with a Unicode test that might fail on platforms with
lousy floating point support...

:::

on the other hand, I'm not sure I think it's a really good idea to
have "numeric" return a floating point value.  consider this:

>>> import unicodedata
>>> unicodedata.numeric(u"\N{VULGAR FRACTION ONE THIRD}")
0.33333333333333331

(the glyph looks like "1/3", and that's also what the numeric
property field in the Unicode database says)

:::

if I had access to the time machine, I'd change it to:

>>> unicodedata.numeric(u"\N{VULGAR FRACTION ONE THIRD}")
(1, 3)

...but maybe we can add an alternate API that returns the
*exact* fraction (as a numerator/denominator tuple)?

>>> unicodedata.numeric2(u"\N{VULGAR FRACTION ONE THIRD}")
(1, 3)

(hopefully, someone will come up with a better name)

</F>




From ping at lfw.org  Thu Sep 28 22:35:24 2000
From: ping at lfw.org (The Ping of Death)
Date: Thu, 28 Sep 2000 15:35:24 -0500 (CDT)
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point
 question...)
In-Reply-To: <004f01c0298c$62ba2320$766940d5@hagrid>
Message-ID: <Pine.LNX.4.10.10009281534010.5685-100000@server1.lfw.org>

On Thu, 28 Sep 2000, Fredrik Lundh wrote:
> if I had access to the time machine, I'd change it to:
> 
> >>> unicodedata.numeric(u"\N{VULGAR FRACTION ONE THIRD}")
> (1, 3)
> 
> ...but maybe we can add an alternate API that returns the
> *exact* fraction (as a numerator/denominator tuple)?
> 
> >>> unicodedata.numeric2(u"\N{VULGAR FRACTION ONE THIRD}")
> (1, 3)
> 
> (hopefully, someone will come up with a better name)

unicodedata.rational might be an obvious choice.

    >>> unicodedata.rational(u"\N{VULGAR FRACTION ONE THIRD}")
    (1, 3)


-- ?!ng




From tim_one at email.msn.com  Thu Sep 28 22:52:28 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 28 Sep 2000 16:52:28 -0400
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point question...)
In-Reply-To: <Pine.LNX.4.10.10009281534010.5685-100000@server1.lfw.org>
Message-ID: <LNBBLJKPBEHFEDALKOLCEELJHIAA.tim_one@email.msn.com>

[/F]
> ...but maybe we can add an alternate API that returns the
> *exact* fraction (as a numerator/denominator tuple)?
>
> >>> unicodedata.numeric2(u"\N{VULGAR FRACTION ONE THIRD}")
> (1, 3)
>
> (hopefully, someone will come up with a better name)

[The Ping of Death]

LOL!  Great name, Ping.

> unicodedata.rational might be an obvious choice.
>
>     >>> unicodedata.rational(u"\N{VULGAR FRACTION ONE THIRD}")
>     (1, 3)

Perfect -- another great name.  Beats all heck out of unicodedata.vulgar()
too.

leaving-it-up-to-/f-to-decide-what-.rational()-should-return-for-pi-
    ly y'ts  - the timmy of death





From thomas at xs4all.net  Thu Sep 28 22:53:30 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 28 Sep 2000 22:53:30 +0200
Subject: [Python-Dev] 2.0b2 on Slackware 7.0
In-Reply-To: <14801.2005.843456.598712@cj42289-a.reston1.va.home.com>; from fdrake@beopen.com on Tue, Sep 26, 2000 at 04:32:21PM -0400
References: <14801.2005.843456.598712@cj42289-a.reston1.va.home.com>
Message-ID: <20000928225330.A26568@xs4all.nl>

On Tue, Sep 26, 2000 at 04:32:21PM -0400, Fred L. Drake, Jr. wrote:

>   I just built and tested 2.0b2 on Slackware 7.0, and found that
> threads failed miserably.  I got the message:

> pthread_cond_wait: Interrupted system call

>   If anyone has any ideas, please send them along!  I'll turn this
> into a real bug report later.

I'm inclined to nudge this towards a libc bug... The exact version of glibc
Slackware 7 uses would be important, in that case. Redhat has been using
glibc 2.1.3 for a while, which seems stable, but I have no clue what
Slackware is using nowadays (I believe they were one of the last
of the major distributions to move to glibc, but I might be mistaken.) And
then there is the possibility of optimization bugs in the gcc that compiled
Python or the gcc that compiled the libc/libpthreads. 

(That last bit is easy to test, though: copy the python binary from a working
linux machine with the same kernel major version & libc major version. If it
works, it's an optimization bug. If it exhibits the same bug, it's
probably libc/libpthreads causing it somehow. If it fails to start
altogether, Slackware is using strange libs (and they might be the cause of
the bug, or might be just the *exposer* of the bug).)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From effbot at telia.com  Thu Sep 28 23:14:45 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Thu, 28 Sep 2000 23:14:45 +0200
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point question...)
References: <LNBBLJKPBEHFEDALKOLCEELJHIAA.tim_one@email.msn.com>
Message-ID: <00cb01c02991$23f61360$766940d5@hagrid>

tim wrote:
> leaving-it-up-to-/f-to-decide-what-.rational()-should-return-for-pi-
>     ly y'ts  - the timmy of death

oh, the unicode folks have figured that one out:

>>> unicodedata.numeric(u"\N{GREEK PI SYMBOL}")
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
ValueError: not a numeric character

</F>




From effbot at telia.com  Thu Sep 28 23:49:13 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Thu, 28 Sep 2000 23:49:13 +0200
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point question...)
References: <LNBBLJKPBEHFEDALKOLCEELJHIAA.tim_one@email.msn.com>
Message-ID: <002a01c02996$9b1742c0$766940d5@hagrid>

tim wrote:
> > unicodedata.rational might be an obvious choice.
> >
> >     >>> unicodedata.rational(u"\N{VULGAR FRACTION ONE THIRD}")
> >     (1, 3)
> 
> Perfect -- another great name.  Beats all heck out of unicodedata.vulgar()
> too.

should I interpret this as a +1, or should I write a PEP on
this topic? ;-)

</F>




From tim_one at email.msn.com  Fri Sep 29 00:12:23 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 28 Sep 2000 18:12:23 -0400
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point question...)
In-Reply-To: <00cb01c02991$23f61360$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCCELNHIAA.tim_one@email.msn.com>

[tim]
> leaving-it-up-to-/f-to-decide-what-.rational()-should-return-for-pi-
>     ly y'ts  - the timmy of death

[/F]
> oh, the unicode folks have figured that one out:
>
> >>> unicodedata.numeric(u"\N{GREEK PI SYMBOL}")
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> ValueError: not a numeric character

Ya, except I'm starting to suspect they're not floating-point experts
either:

>>> unicodedata.numeric(u"\N{PLANCK CONSTANT OVER TWO PI}")
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
ValueError: not a numeric character
>>> unicodedata.numeric(u"\N{EULER CONSTANT}")
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
ValueError: not a numeric character
>>> unicodedata.numeric(u"\N{AIRSPEED OF AFRICAN SWALLOW}")
UnicodeError: Unicode-Escape decoding error: Invalid Unicode Character Name
>>>





From mal at lemburg.com  Fri Sep 29 00:30:03 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 29 Sep 2000 00:30:03 +0200
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating 
 point question...)
References: <Pine.LNX.4.10.10009281534010.5685-100000@server1.lfw.org>
Message-ID: <39D3C66B.3A3350AE@lemburg.com>

Fredrik Lundh wrote:
> 
> tim wrote:
> > > unicodedata.rational might be an obvious choice.
> > >
> > >     >>> unicodedata.rational(u"\N{VULGAR FRACTION ONE THIRD}")
> > >     (1, 3)
> >
> > Perfect -- another great name.  Beats all heck out of unicodedata.vulgar()
> > too.
> 
> should I interpret this as a +1, or should I write a PEP on
> this topic? ;-)

+1 from here. 

I really only chose floats to get all possibilities (digit, decimal
and fractions) into one type... Python should support rational numbers
some day.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From tim_one at email.msn.com  Fri Sep 29 00:32:50 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 28 Sep 2000 18:32:50 -0400
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point question...)
In-Reply-To: <002a01c02996$9b1742c0$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCOELNHIAA.tim_one@email.msn.com>

[The Ping of Death suggests unicodedata.rational]
>     >>> unicodedata.rational(u"\N{VULGAR FRACTION ONE THIRD}")
>     (1, 3)

[Timmy replies]
> Perfect -- another great name.  Beats all heck out of
> unicodedata.vulgar() too.

[/F inquires]
> should I interpret this as a +1, or should I write a PEP on
> this topic? ;-)

I'm on vacation (but too ill to do much besides alternate sleep & email
<snarl>), and I'm not sure we have clear rules about how votes from
commercial Python developers count when made on their own time.  Perhaps a
meta-PEP first to resolve that issue?

Oh, all right, just speaking for myself, I'm +1 on The Ping of Death's name
suggestion provided this function is needed at all.  But not being a Unicode
Guy by nature, I have no opinion on whether the function *is* needed (I
understand how digits work in American English, and ord(ch)-ord('0') is the
limit of my experience; can't say whether even the current .numeric() is
useful for Klingons or Lawyers or whoever it is who expects to get a numeric
value out of a character for 1/2 or 1/3).





From mal at lemburg.com  Fri Sep 29 00:33:50 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 29 Sep 2000 00:33:50 +0200
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point 
 question...)
References: <LNBBLJKPBEHFEDALKOLCCELNHIAA.tim_one@email.msn.com>
Message-ID: <39D3C74E.B1952909@lemburg.com>

Tim Peters wrote:
> 
> [tim]
> > leaving-it-up-to-/f-to-decide-what-.rational()-should-return-for-pi-
> >     ly y'ts  - the timmy of death
> 
> [/F]
> > oh, the unicode folks have figured that one out:
> >
> > >>> unicodedata.numeric(u"\N{GREEK PI SYMBOL}")
> > Traceback (most recent call last):
> >   File "<stdin>", line 1, in ?
> > ValueError: not a numeric character
> 
> Ya, except I'm starting to suspect they're not floating-point experts
> either:
> 
> >>> unicodedata.numeric(u"\N{PLANCK CONSTANT OVER TWO PI}")
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> ValueError: not a numeric character
> >>> unicodedata.numeric(u"\N{EULER CONSTANT}")
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> ValueError: not a numeric character
> >>> unicodedata.numeric(u"\N{AIRSPEED OF AFRICAN SWALLOW}")
> UnicodeError: Unicode-Escape decoding error: Invalid Unicode Character Name
> >>>

Perhaps you should submit these for Unicode 4.0 ;-)

But really, I don't suspect that anyone is going to do serious
character to number conversion on these esoteric characters. Plain
old digits will do just as they always have (or does anyone know
of ways to represent irrational numbers on PCs by other means than
an algorithm which spits out new digits every now and then?).

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Fri Sep 29 00:38:47 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 29 Sep 2000 00:38:47 +0200
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point 
 question...)
References: <LNBBLJKPBEHFEDALKOLCOELNHIAA.tim_one@email.msn.com>
Message-ID: <39D3C877.BDBC52DF@lemburg.com>

Tim Peters wrote:
> 
> [The Ping of Death suggests unicodedata.rational]
> >     >>> unicodedata.rational(u"\N{VULGAR FRACTION ONE THIRD}")
> >     (1, 3)
> 
> [Timmy replies]
> > Perfect -- another great name.  Beats all heck out of
> > unicodedata.vulgar() too.
> 
> [/F inquires]
> > should I interpret this as a +1, or should I write a PEP on
> > this topic? ;-)
> 
> I'm on vacation (but too ill to do much besides alternate sleep & email
> <snarl>), and I'm not sure we have clear rules about how votes from
> commercial Python developers count when made on their own time.  Perhaps a
> meta-PEP first to resolve that issue?
> 
> Oh, all right, just speaking for myself, I'm +1 on The Ping of Death's name
> suggestion provided this function is needed at all.  But not being a Unicode
> Guy by nature, I have no opinion on whether the function *is* needed (I
> understand how digits work in American English, and ord(ch)-ord('0') is the
> limit of my experience; can't say whether even the current .numeric() is
> useful for Klingons or Lawyers or whoever it is who expects to get a numeric
> value out of a character for 1/2 or 1/3).

The reason for "numeric" being available at all is that the
UnicodeData.txt file format specifies such a field. I don't believe
anyone will make serious use of it though... e.g. 2² would parse as 22
and not evaluate to 4.
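A tiny sketch of that point, using SUPERSCRIPT TWO (the "squared" sign):
the character carries a Unicode numeric value, but it is not a decimal
digit, so it never participates in ordinary number parsing.

```python
# Demonstration: SUPERSCRIPT TWO has a numeric property, but int()
# only accepts true decimal digits, so "2" + superscript-two is not 4
# (nor 22) -- it simply fails to parse.
import unicodedata

sq = u"\N{SUPERSCRIPT TWO}"
assert unicodedata.numeric(sq) == 2.0   # the property is there...

try:
    int(u"2" + sq)                      # ...but this is not a number
except ValueError:
    pass                                # not a decimal digit; int() refuses
```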

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From tim_one at email.msn.com  Fri Sep 29 00:48:08 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 28 Sep 2000 18:48:08 -0400
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point  question...)
In-Reply-To: <39D3C74E.B1952909@lemburg.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGELPHIAA.tim_one@email.msn.com>

[Tim]
> >>> unicodedata.numeric(u"\N{PLANCK CONSTANT OVER TWO PI}")
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> ValueError: not a numeric character
> >>> unicodedata.numeric(u"\N{EULER CONSTANT}")
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> ValueError: not a numeric character
> >>> unicodedata.numeric(u"\N{AIRSPEED OF AFRICAN SWALLOW}")
> UnicodeError: Unicode-Escape decoding error: Invalid Unicode
                Character Name

[MAL]
> Perhaps you should submit these for Unicode 4.0 ;-)

Note that the first two are already there; they just don't have an
associated numerical value.  The last one was a hint that I was trying to
write a frivolous msg while giving my "<wink>" key a break <wink>.

> But really, I don't suspect that anyone is going to do serious
> character to number conversion on these esoteric characters. Plain
> old digits will do just as they always have ...

Which is why I have to wonder whether there's *any* value in exposing the
numeric-value property beyond regular old digits.





From MarkH at ActiveState.com  Fri Sep 29 03:36:11 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Fri, 29 Sep 2000 12:36:11 +1100
Subject: [Python-Dev] Patch to avoid conflict with older versions of Python.
Message-ID: <ECEPKNMJLHAPFFJHDOJBEEIHDLAA.MarkH@ActiveState.com>

Hi all,
	I'd like some feedback on a patch assigned to me.  It is designed to
prevent Python extensions built for an earlier version of Python from
crashing the new version.

I haven't actually tested the patch, but I am sure it works as advertised
(who is db31 anyway?).

My question relates more to the "style" - the patch locates the new .pyd's
address in memory, and parses through the MS PE/COFF format, locating the
import table.  It then scans the import table looking for Pythonxx.dll, and
compares any found entries with the current version.

Quite clever - a definite plus is that it should work for all old and
future versions (of Python - dunno about Windows ;-) - but do we want this
sort of code in Python?  Is this sort of hack, however clever, going to
come back and bite us?

Second related question:  if people like it, is this feature something we
can squeeze in for 2.0?

If there are no objections to any of this, I am happy to test it and check
it in - but am not confident of doing so without some feedback.

Thanks,

Mark.




From MarkH at ActiveState.com  Fri Sep 29 03:42:01 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Fri, 29 Sep 2000 12:42:01 +1100
Subject: [Python-Dev] Patch to avoid conflict with older versions of Python.
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBEEIHDLAA.MarkH@ActiveState.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBAEIIDLAA.MarkH@ActiveState.com>

> Hi all,
> 	I'd like some feedback on a patch assigned to me.

sorry -
http://sourceforge.net/patch/?func=detailpatch&patch_id=101676&group_id=5470

Mark.




From tim_one at email.msn.com  Fri Sep 29 04:24:24 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 28 Sep 2000 22:24:24 -0400
Subject: [Python-Dev] .pyc broken on Windows -- anywhere else?
Message-ID: <LNBBLJKPBEHFEDALKOLCEEMHHIAA.tim_one@email.msn.com>

This is from 2.0b2 Windows, and typical:

C:\Python20>python -v
# C:\PYTHON20\lib\site.pyc has bad magic
import site # from C:\PYTHON20\lib\site.py
# wrote C:\PYTHON20\lib\site.pyc
# C:\PYTHON20\lib\os.pyc has bad magic
import os # from C:\PYTHON20\lib\os.py
# wrote C:\PYTHON20\lib\os.pyc
import nt # builtin
# C:\PYTHON20\lib\ntpath.pyc has bad magic
import ntpath # from C:\PYTHON20\lib\ntpath.py
# wrote C:\PYTHON20\lib\ntpath.pyc
# C:\PYTHON20\lib\stat.pyc has bad magic
import stat # from C:\PYTHON20\lib\stat.py
# wrote C:\PYTHON20\lib\stat.pyc
# C:\PYTHON20\lib\string.pyc has bad magic
import string # from C:\PYTHON20\lib\string.py
# wrote C:\PYTHON20\lib\string.pyc
import strop # builtin
# C:\PYTHON20\lib\UserDict.pyc has bad magic
import UserDict # from C:\PYTHON20\lib\UserDict.py
# wrote C:\PYTHON20\lib\UserDict.pyc
Python 2.0b2 (#6, Sep 26 2000, 14:59:21) [MSC 32 bit (Intel)] on win32
Type "copyright", "credits" or "license" for more information.
>>>

That is, .pyc's don't work at all anymore on Windows:  Python *always*
thinks they have a bad magic number.  Elsewhere?

Also noticed that test_popen2 got broken on Windows after 2.0b2, for a very
weird reason:

C:\Code\python\dist\src\PCbuild>python ../lib/test/test_popen2.py
Test popen2 module:
testing popen2...
testing popen3...
Traceback (most recent call last):
  File "../lib/test/test_popen2.py", line 64, in ?
    main()
  File "../lib/test/test_popen2.py", line 23, in main
    popen2._test()
  File "c:\code\python\dist\src\lib\popen2.py", line 188, in _test
    for inst in _active[:]:
NameError: There is no variable named '_active'

C:\Code\python\dist\src\PCbuild>

C:\Code\python\dist\src\PCbuild>python ../lib/popen2.py
testing popen2...
testing popen3...
Traceback (most recent call last):
  File "../lib/popen2.py", line 195, in ?
    _test()
  File "../lib/popen2.py", line 188, in _test
    for inst in _active[:]:
NameError: There is no variable named '_active'

C:\Code\python\dist\src\PCbuild>

Ah!  That's probably because of this clever new code:

if sys.platform[:3] == "win":
    # Some things don't make sense on non-Unix platforms.
    del Popen3, Popen4, _active, _cleanup

If I weren't on vacation, I'd check in a fix <wink>.





From fdrake at beopen.com  Fri Sep 29 04:25:00 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 28 Sep 2000 22:25:00 -0400 (EDT)
Subject: [Python-Dev] Re: WHOA!!! Screw up on my part: how do I undo this (Re: [Python-checkins] CVS: black - Imported sources)
In-Reply-To: <20000927003233.C19872@ActiveState.com>
References: <200009270706.AAA21107@slayer.i.sourceforge.net>
	<20000927003233.C19872@ActiveState.com>
Message-ID: <14803.64892.937014.475312@cj42289-a.reston1.va.home.com>

Trent Mick writes:
 > I was playing with a different SourceForge project and I screwed up my
 > CVSROOT (used Python's instead). Sorry, sorry!

  Well, you blew it.  Don't worry, we'll have you kicked off
SourceForge in no time!  ;)
  Well, maybe not.  I've submitted a support request to fix this:

http://sourceforge.net/support/?func=detailsupport&support_id=106112&group_id=1


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From m.favas at per.dem.csiro.au  Fri Sep 29 04:49:54 2000
From: m.favas at per.dem.csiro.au (Mark Favas)
Date: Fri, 29 Sep 2000 10:49:54 +0800
Subject: [Python-Dev] .pyc broken on Windows -- anywhere else?
Message-ID: <39D40352.5C511629@per.dem.csiro.au>

Tim writes:
That is, .pyc's don't work at all anymore on Windows:  Python *always*
thinks they have a bad magic number.  Elsewhere?

Just grabbed the latest from CVS - .pyc is still fine on Tru64 Unix...

Mark
-- 
Email - m.favas at per.dem.csiro.au       Postal - Mark C Favas
Phone - +61 8 9333 6268, 041 892 6074           CSIRO Exploration &
Mining
Fax   - +61 8 9387 8642                         Private Bag No 5
                                                Wembley, Western
Australia 6913



From nhodgson at bigpond.net.au  Fri Sep 29 05:58:41 2000
From: nhodgson at bigpond.net.au (Neil Hodgson)
Date: Fri, 29 Sep 2000 13:58:41 +1000
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point  question...)
References: <LNBBLJKPBEHFEDALKOLCGELPHIAA.tim_one@email.msn.com>
Message-ID: <045201c029c9$8f49fd10$8119fea9@neil>

[Tim]
> Which is why I have to wonder whether there's *any* value in exposing the
> numeric-value property beyond regular old digits.

   Running (in IDLE or PythonWin with a font that covers most of Unicode
like Tahoma):
import unicodedata
for c in range(0x10000):
    x = unichr(c)
    try:
        b = unicodedata.numeric(x)
        #print "numeric:", repr(x)
        try:
            a = unicodedata.digit(x)
            if a != b:
                print "bad", repr(x)
        except ValueError:
            print "Numeric but not digit", hex(c), x.encode("utf8"), \
                  "numeric ->", b
    except ValueError:
        pass

   Finds about 130 characters. The only ones I feel are worth worrying about
are the half, quarters and eighths (0xbc, 0xbd, 0xbe, 0x215b, 0x215c,
0x215d, 0x215e) which are commonly used for expressing the prices of stocks
and commodities in the US. This may be rarely used but it is better to have
it available than to have people coding up their own translation tables.
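Those fraction code points can be checked directly against the database (shown in present-day syntax; the values below come straight from `unicodedata`):

```python
import unicodedata

# The vulgar-fraction code points mentioned above all carry
# numeric values in the Unicode character database:
for cp in (0xbc, 0xbd, 0xbe, 0x215b, 0x215c, 0x215d, 0x215e):
    print(hex(cp), unicodedata.numeric(chr(cp)))
```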

   The 0x302* 'Hangzhou' numerals look like they should be classified as
digits.

   Neil





From tim_one at email.msn.com  Fri Sep 29 05:27:55 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 28 Sep 2000 23:27:55 -0400
Subject: [Python-Dev] .pyc broken on Windows -- anywhere else?
In-Reply-To: <39D40352.5C511629@per.dem.csiro.au>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEMJHIAA.tim_one@email.msn.com>

[Tim]
> That is, .pyc's don't work at all anymore on Windows:  Python *always*
> thinks they have a bad magic number.  Elsewhere?

[Mark Favas]
> Just grabbed the latest from CVS - .pyc is still fine on Tru64 Unix...

Good clue!  Looks like Guido broke this on Windows when adding some
"exclusive write" silliness <wink> for Unixoids.  I'll try to make time
tonight to understand it (*looks* like fdopen is too late to ask for binary
mode under Windows ...).
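For context, a sketch of why a text-mode stream mangles .pyc files on Windows: text mode expands each 0x0A byte to 0x0D 0x0A on write, and .pyc magic numbers deliberately end in those two bytes so exactly this corruption is detected. (The magic value below is made up for illustration, not any real version's.)

```python
# Illustrative only: simulate Windows text-mode LF -> CRLF translation.
MAGIC = b"\x99\x4e\x0d\x0a"  # made-up magic; real magics also end in \r\n

def text_mode_write(data):
    # What a text-mode stream does to the bytes on Windows:
    return data.replace(b"\n", b"\r\n")

# The bytes on disk no longer match, so the magic-number check fails:
assert text_mode_write(MAGIC) != MAGIC
```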





From tim_one at email.msn.com  Fri Sep 29 05:40:49 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 28 Sep 2000 23:40:49 -0400
Subject: [Python-Dev] .pyc broken on Windows -- anywhere else?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOEMJHIAA.tim_one@email.msn.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEMLHIAA.tim_one@email.msn.com>

Any Unix geek awake?  import.c has this, starting at line 640:

#if defined(O_EXCL)&&defined(O_CREAT)&&defined(O_WRONLY)&&defined(O_TRUNC)
...
	fd = open(filename, O_EXCL|O_CREAT|O_WRONLY|O_TRUNC, 0666);

I need to add O_BINARY to this soup to fix .pyc's under Windows.  Is
O_BINARY customarily defined on Unices?  I realize Unices don't *need* it,
the question is whether it will break Unices if it's there ...





From esr at thyrsus.com  Fri Sep 29 05:59:12 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Thu, 28 Sep 2000 23:59:12 -0400
Subject: [Python-Dev] .pyc broken on Windows -- anywhere else?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKEMLHIAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Thu, Sep 28, 2000 at 11:40:49PM -0400
References: <LNBBLJKPBEHFEDALKOLCOEMJHIAA.tim_one@email.msn.com> <LNBBLJKPBEHFEDALKOLCKEMLHIAA.tim_one@email.msn.com>
Message-ID: <20000928235912.A9339@thyrsus.com>

Tim Peters <tim_one at email.msn.com>:
> Any Unix geek awake?  import.c has this, starting at line 640:
> 
> #if defined(O_EXCL)&&defined(O_CREAT)&&defined(O_WRONLY)&&defined(O_TRUNC)
> ...
> 	fd = open(filename, O_EXCL|O_CREAT|O_WRONLY|O_TRUNC, 0666);
> 
> I need to add O_BINARY to this soup to fix .pyc's under Windows.  Is
> O_BINARY customarily defined on Unices?  I realize Unices don't *need* it,
> the question is whether it will break Unices if it's there ...

It will.  In particular, there is no such flag on Linux.  However,
the workaround is trivial:

1. Make your flag argument O_EXCL|O_CREAT|O_WRONLY|O_TRUNC|O_BINARY

2. Above it somewhere, write

#ifndef O_BINARY
#define O_BINARY	0
#endif

Quite painless.
-- 
		<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>

Society in every state is a blessing, but government even in its best
state is but a necessary evil; in its worst state an intolerable one;
for when we suffer, or are exposed to the same miseries *by a
government*, which we might expect in a country *without government*,
our calamity is heightened by reflecting that we furnish the means
by which we suffer.
	-- Thomas Paine



From tim_one at email.msn.com  Fri Sep 29 05:47:55 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 28 Sep 2000 23:47:55 -0400
Subject: [Python-Dev] .pyc broken on Windows -- anywhere else?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKEMLHIAA.tim_one@email.msn.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEMMHIAA.tim_one@email.msn.com>

Nevermind.  Fixed it in a way that will be safe everywhere.

> -----Original Message-----
> From: python-dev-admin at python.org [mailto:python-dev-admin at python.org]On
> Behalf Of Tim Peters
> Sent: Thursday, September 28, 2000 11:41 PM
> To: Mark Favas; python-dev at python.org
> Subject: RE: [Python-Dev] .pyc broken on Windows -- anywhere else?
>
>
> Any Unix geek awake?  import.c has this, starting at line 640:
>
> #if defined(O_EXCL)&&defined(O_CREAT)&&defined(O_WRONLY)&&defined(O_TRUNC)
> ...
> 	fd = open(filename, O_EXCL|O_CREAT|O_WRONLY|O_TRUNC, 0666);
>
> I need to add O_BINARY to this soup to fix .pyc's under Windows.  Is
> O_BINARY customarily defined on Unices?  I realize Unices don't *need* it,
> the question is whether it will break Unices if it's there ...





From fdrake at beopen.com  Fri Sep 29 05:48:49 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 28 Sep 2000 23:48:49 -0400 (EDT)
Subject: [Python-Dev] .pyc broken on Windows -- anywhere else?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKEMLHIAA.tim_one@email.msn.com>
References: <LNBBLJKPBEHFEDALKOLCOEMJHIAA.tim_one@email.msn.com>
	<LNBBLJKPBEHFEDALKOLCKEMLHIAA.tim_one@email.msn.com>
Message-ID: <14804.4385.22560.522921@cj42289-a.reston1.va.home.com>

Tim Peters writes:
 > Any Unix geek awake?  import.c has this, starting at line 640:

  Probably quite a few!

 > #if defined(O_EXCL)&&defined(O_CREAT)&&defined(O_WRONLY)&&defined(O_TRUNC)
 > ...
 > 	fd = open(filename, O_EXCL|O_CREAT|O_WRONLY|O_TRUNC, 0666);
 > 
 > I need to add O_BINARY to this soup to fix .pyc's under Windows.  Is
 > O_BINARY customarily defined on Unices?  I realize Unices don't *need* it,
 > the question is whether it will break Unices if it's there ...

  I think it varies substantially.  I just checked on a FreeBSD
machine in /usr/include/*.h and /usr/include/*/*.h, and grep said it
wasn't there.  It is defined on my Linux box, however.
  Since O_BINARY is a no-op for Unix, you can do this:

#if defined(O_EXCL)&&defined(O_CREAT)&&defined(O_WRONLY)&&defined(O_TRUNC)
#ifndef O_BINARY
#define O_BINARY (0)
#endif
...
	fd = open(filename, O_EXCL|O_CREAT|O_WRONLY|O_TRUNC, 0666);


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From fdrake at beopen.com  Fri Sep 29 05:51:44 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 28 Sep 2000 23:51:44 -0400 (EDT)
Subject: [Python-Dev] .pyc broken on Windows -- anywhere else?
In-Reply-To: <20000928235912.A9339@thyrsus.com>
References: <LNBBLJKPBEHFEDALKOLCOEMJHIAA.tim_one@email.msn.com>
	<LNBBLJKPBEHFEDALKOLCKEMLHIAA.tim_one@email.msn.com>
	<20000928235912.A9339@thyrsus.com>
Message-ID: <14804.4560.644795.806373@cj42289-a.reston1.va.home.com>

Eric S. Raymond writes:
 > It will.  In particular, there is no such flag on Linux.  However,
 > the workaround is trivial:

  Ah, looking back at my grep output, I see that it's defined by a lot
of libraries, but not the standard headers.  It *is* defined by the
Apache API headers, kpathsea, MySQL, OpenSSL, and Qt.  And that's just
from what I have installed.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From bwarsaw at beopen.com  Fri Sep 29 08:06:33 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 29 Sep 2000 02:06:33 -0400 (EDT)
Subject: [Python-Dev] Re: WHOA!!! Screw up on my part: how do I undo this (Re: [Python-checkins] CVS: black - Imported sources)
References: <200009270706.AAA21107@slayer.i.sourceforge.net>
	<20000927003233.C19872@ActiveState.com>
Message-ID: <14804.12649.504962.985774@anthem.concentric.net>

>>>>> "TM" == Trent Mick <trentm at ActiveState.com> writes:

    TM> I was playing with a different SourceForge project and I
    TM> screwed up my CVSROOT (used Python's instead). Sorry SOrry!

    TM> How do I undo this cleanly? I could 'cvs remove' the
    TM> README.txt file but that would still leave the top-level
    TM> 'black/' turd right? Do the SourceForge admin guys have to
    TM> manually kill the 'black' directory in the repository?

Once a directory's been added, it's nearly impossible to cleanly delete
it from CVS.  If it has infected people's working directories, you're
really screwed, because even if the SF admins remove it from the
repository, it'll be a pain to clean up on the client side.

Probably best thing to do is make sure you "cvs rm" everything in the
directory and then just let "cvs up -P" remove the empty directory.
Everybody /is/ using -P (and -d) right? :)

-Barry



From effbot at telia.com  Fri Sep 29 09:01:37 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Fri, 29 Sep 2000 09:01:37 +0200
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point  question...)
References: <LNBBLJKPBEHFEDALKOLCGELPHIAA.tim_one@email.msn.com>
Message-ID: <007301c029e3$612e1960$766940d5@hagrid>

tim wrote:
> > But really, I don't suspect that anyone is going to do serious
> > character to number conversion on these esoteric characters. Plain
> > old digits will do just as they always have ...
> 
> Which is why I have to wonder whether there's *any* value in exposing the
> numeric-value property beyond regular old digits.

the unicode database has three fields dealing with the numeric
value: decimal digit value (integer), digit value (integer), and
numeric value (integer *or* rational):

    "This is a numeric field. If the character has the numeric
    property, as specified in Chapter 4 of the Unicode Standard,
    the value of that character is represented with an integer or
    rational number in this field."

here's today's proposal: let's claim that it's a bug to return a float
from "numeric", and change it to return a string instead.

(this will match "decomposition", which is also "broken" -- it really
should return a tag followed by a sequence of unicode characters).

</F>




From martin at loewis.home.cs.tu-berlin.de  Fri Sep 29 09:01:19 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Fri, 29 Sep 2000 09:01:19 +0200
Subject: [Python-Dev] Python-Dev] Patch to avoid conflict with older versions of Python.
Message-ID: <200009290701.JAA01119@loewis.home.cs.tu-berlin.de>

> but do we want this sort of code in Python?

Since I proposed a more primitive approach to solve the same problem
(which you had postponed), I'm obviously in favour of that patch.

> Is this sort of hack, however clever, going to come back and bite us?

I can't see why. The code is quite defensive: If the data structures
don't look like what it expects, it gives up and claims it can't find
the version of the python dll used by this module.

So in worst case, we get what we have now.

My only concern is that it assumes the HMODULE is an address which can
be dereferenced. If there was some MS documentation stating that this
is guaranteed in Win32, it'd be fine. If it is merely established fact
that all Win32 current implementations implement HMODULE that way, I'd
rather see a __try/__except around that - but that would only add to
the defensive style of this patch.

A hack is required since earlier versions of Python did not consider
this problem. I don't know whether python20.dll will behave reasonably
when loaded into Python 2.1 next year - was there anything done to
address the "uninitialized interpreter" problem?

> if people like it, is this feature something we can squeeze in for
> 2.0?

I think this patch will have most value if applied to 2.0. When 2.1
comes along, many people will have been bitten by this bug, and will
know to avoid it - so it won't do that much good in 2.1.

I'm not looking forward to answering all the help at python.org messages
to explain why Python can't deal with versions properly, so I'd rather
see these people get a nice exception instead of IDLE silently closing
all windows [including those with two hours of unsaved work].

Regards,
Martin

P.S db3l is David Bolen, see http://sourceforge.net/users/db3l.



From tim_one at email.msn.com  Fri Sep 29 09:32:09 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 29 Sep 2000 03:32:09 -0400
Subject: [Python-Dev] Patch to avoid conflict with older versions of Python.
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBEEIHDLAA.MarkH@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCENFHIAA.tim_one@email.msn.com>

[Mark Hammond]
> 	I'd like some feedback on a patch assigned to me.

It's assigned to you only because I'm on vacation now <wink>.

> It is designed to prevent Python extensions built for an earlier
> version of Python from crashing the new version.
>
> I haven't actually tested the patch, but I am sure it works as
> advertised (who is db31 anyway?).

It's sure odd that SF doesn't know!  It's David Bolen; see

http://www.python.org/pipermail/python-list/2000-September/119081.html

> My question relates more to the "style" - the patch locates the new
> .pyd's address in memory, and parses through the MS PE/COFF format,
> locating the import table.  It then scans the import table looking
> for Pythonxx.dll, and compares any found entries with the current
> version.
>
> Quite clever - a definite plus is that it should work for all old and
> future versions (of Python - dunno about Windows ;-) - but do we want
> this sort of code in Python?  Is this sort of hack, however clever,
> going to come back and bite us?

Guido will hate it:  his general rule is that he doesn't want code he
couldn't personally repair if needed, and this code is from Pluto (I hear
that's right next to Redmond, though, so let's not overreact either <wink>).

OTOH, Python goes to extreme lengths to prevent crashes, and my reading of
early c.l.py reports is that the 2.0 DLL incompatibility is going to cause a
lot of crashes out in the field.  People generally don't know squat about
the extension modules they're using -- or sometimes even that they *are*
using some.

> Second related question:  if people like it, is this feature something we
> can squeeze in for 2.0?

Well, it's useless if we don't.  That is, we should bite the bullet and come
up with a principled solution, even if that means extension writers have to
add a few new lines of code or be shunned from the community forever.  But
that won't happen for 2.0.

> If there are no objections to any of this, I am happy to test it and
> check it in - but am not confident of doing so without some feedback.

Guido's out of touch, but I'm on vacation, so he can't yell at me for
encouraging you on my own time.  If it works, I would check it in with the
understanding that we earnestly intend to do whatever it takes to get rid of
this code after 2.0.    It is not a long-term solution, but if it works it's
a very expedient hack.  Hacks suck for us, but letting Python blow up sucks
for users.  So long as I'm on vacation, I side with the users <0.9 wink>.

then-let's-ask-david-to-figure-out-how-to-disable-norton-antivirus-ly
    y'rs  - tim





From thomas.heller at ion-tof.com  Fri Sep 29 09:36:33 2000
From: thomas.heller at ion-tof.com (Thomas Heller)
Date: Fri, 29 Sep 2000 09:36:33 +0200
Subject: [Python-Dev] Patch to avoid conflict with older versions of Python.
References: <ECEPKNMJLHAPFFJHDOJBEEIHDLAA.MarkH@ActiveState.com>
Message-ID: <007d01c029e8$00b33570$4500a8c0@thomasnb>

> Hi all,
> I'd like some feedback on a patch assigned to me.  It is designed to
> prevent Python extensions built for an earlier version of Python from
> crashing the new version.
>
> I haven't actually tested the patch, but I am sure it works as advertised
> (who is db31 anyway?).
>
> My question relates more to the "style" - the patch locates the new .pyd's
> address in memory, and parses through the MS PE/COFF format, locating the
> import table.  It then scans the import table looking for Pythonxx.dll,
> and compares any found entries with the current version.

Shouldn't the win32 api BindImageEx be used? Then you would not have
to know about the PE/COFF format at all. You can install a callback
function which will be called with the dll-names bound.
According to my docs, BindImageEx may not be included in early versions of
Win95, but who is using that anyway?
(Well, ok, what about CE?)

>
> Quite clever - a definite plus is that it should work for all old and
> future versions (of Python - dunno about Windows ;-) - but do we want this
> sort of code in Python?  Is this sort of hack, however clever, going to
> come back and bite us?
>
> Second related question:  if people like it, is this feature something we
> can squeeze in for 2.0?
+1 from me (if I count).

>
> If there are no objections to any of this, I am happy to test it and check
> it in - but am not confident of doing so without some feedback.
>
> Thanks,
>
> Mark.

Thomas




From effbot at telia.com  Fri Sep 29 09:53:57 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Fri, 29 Sep 2000 09:53:57 +0200
Subject: [Python-Dev] Patch to avoid conflict with older versions of Python.
References: <ECEPKNMJLHAPFFJHDOJBEEIHDLAA.MarkH@ActiveState.com> <007d01c029e8$00b33570$4500a8c0@thomasnb>
Message-ID: <012401c029ea$6cfbc7e0$766940d5@hagrid>

> According to my docs, BindImageEx may not be included in early versions of
> Win95, but who is using that anyway?

lots of people -- the first version of our PythonWare
installer didn't run on the original Win95 release, and
we still get complaints about that.

on the other hand, it's not that hard to use BindImageEx
only if it exists...

</F>




From mal at lemburg.com  Fri Sep 29 09:54:16 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 29 Sep 2000 09:54:16 +0200
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point  
 question...)
References: <LNBBLJKPBEHFEDALKOLCGELPHIAA.tim_one@email.msn.com>
Message-ID: <39D44AA8.926DCF04@lemburg.com>

Tim Peters wrote:
> 
> [Tim]
> > >>> unicodedata.numeric(u"\N{PLANCK CONSTANT OVER TWO PI}")
> > Traceback (most recent call last):
> >   File "<stdin>", line 1, in ?
> > ValueError: not a numeric character
> > >>> unicodedata.numeric(u"\N{EULER CONSTANT}")
> > Traceback (most recent call last):
> >   File "<stdin>", line 1, in ?
> > ValueError: not a numeric character
> > >>> unicodedata.numeric(u"\N{AIRSPEED OF AFRICAN SWALLOW}")
> > UnicodeError: Unicode-Escape decoding error: Invalid Unicode
>                 Character Name
> 
> [MAL]
> > Perhaps you should submit these for Unicode 4.0 ;-)
> 
> Note that the first two are already there; they just don't have an
> associated numerical value.  The last one was a hint that I was trying to
> write a frivolous msg while giving my "<wink>" key a break <wink>.

That's what I meant: you should submit the numeric values for
the first two and opt for addition of the last.
 
> > But really, I don't suspect that anyone is going to do serious
> > character to number conversion on these esoteric characters. Plain
> > old digits will do just as they always have ...
> 
> Which is why I have to wonder whether there's *any* value in exposing the
> numeric-value property beyond regular old digits.

It is needed for Unicode 3.0 standard compliance and for whoever
wants to use this data. Since the Unicode database explicitly
contains fractions, I think adding the .rational() API would
make sense to provide a different access method to this data.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Fri Sep 29 10:01:57 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 29 Sep 2000 10:01:57 +0200
Subject: [Python-Dev] .pyc broken on Windows -- anywhere else?
References: <LNBBLJKPBEHFEDALKOLCEEMHHIAA.tim_one@email.msn.com>
Message-ID: <39D44C75.110D83B6@lemburg.com>

Tim Peters wrote:
> 
> This is from 2.0b2 Windows, and typical:
> 
> C:\Python20>python -v
> # C:\PYTHON20\lib\site.pyc has bad magic
> import site # from C:\PYTHON20\lib\site.py
> # wrote C:\PYTHON20\lib\site.pyc
> # C:\PYTHON20\lib\os.pyc has bad magic
> import os # from C:\PYTHON20\lib\os.py
> # wrote C:\PYTHON20\lib\os.pyc
> import nt # builtin
> # C:\PYTHON20\lib\ntpath.pyc has bad magic
> import ntpath # from C:\PYTHON20\lib\ntpath.py
> # wrote C:\PYTHON20\lib\ntpath.pyc
> # C:\PYTHON20\lib\stat.pyc has bad magic
> import stat # from C:\PYTHON20\lib\stat.py
> # wrote C:\PYTHON20\lib\stat.pyc
> # C:\PYTHON20\lib\string.pyc has bad magic
> import string # from C:\PYTHON20\lib\string.py
> # wrote C:\PYTHON20\lib\string.pyc
> import strop # builtin
> # C:\PYTHON20\lib\UserDict.pyc has bad magic
> import UserDict # from C:\PYTHON20\lib\UserDict.py
> # wrote C:\PYTHON20\lib\UserDict.pyc
> Python 2.0b2 (#6, Sep 26 2000, 14:59:21) [MSC 32 bit (Intel)] on win32
> Type "copyright", "credits" or "license" for more information.
> >>>
> 
> That is, .pyc's don't work at all anymore on Windows:  Python *always*
> thinks they have a bad magic number.  Elsewhere?

FYI, it works just fine on Linux on i586.

--
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Fri Sep 29 10:13:34 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 29 Sep 2000 10:13:34 +0200
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point  
 question...)
References: <LNBBLJKPBEHFEDALKOLCGELPHIAA.tim_one@email.msn.com> <007301c029e3$612e1960$766940d5@hagrid>
Message-ID: <39D44F2E.14701980@lemburg.com>

Fredrik Lundh wrote:
> 
> tim wrote:
> > > But really, I don't suspect that anyone is going to do serious
> > > character to number conversion on these esoteric characters. Plain
> > > old digits will do just as they always have ...
> >
> > Which is why I have to wonder whether there's *any* value in exposing the
> > numeric-value property beyond regular old digits.
> 
> the unicode database has three fields dealing with the numeric
> value: decimal digit value (integer), digit value (integer), and
> numeric value (integer *or* rational):
> 
>     "This is a numeric field. If the character has the numeric
>     property, as specified in Chapter 4 of the Unicode Standard,
>     the value of that character is represented with an integer or
>     rational number in this field."
> 
> here's today's proposal: let's claim that it's a bug to return a float
> from "numeric", and change it to return a string instead.

Hmm, how about making the return format an option ?

unicodedata.numeric(char, format=('float' (default), 'string', 'fraction'))
 
> (this will match "decomposition", which is also "broken" -- it really
> should return a tag followed by a sequence of unicode characters).

Same here:

unicodedata.decomposition(char, format=('string' (default), 
                                        'tuple'))

I'd opt for making the API more customizable rather than trying
to find the one and only true return format ;-)
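For reference, what the two functions return today can be checked quickly with VULGAR FRACTION ONE HALF (present-day syntax):

```python
import unicodedata

half = "\u00bd"  # VULGAR FRACTION ONE HALF
# numeric() returns a float, losing the exact rational from the database:
print(unicodedata.numeric(half))        # 0.5
# decomposition() returns a single string: an optional <tag> followed
# by the hex code points of the decomposition:
print(unicodedata.decomposition(half))  # '<fraction> 0031 2044 0032'
```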

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From thomas.heller at ion-tof.com  Fri Sep 29 10:48:51 2000
From: thomas.heller at ion-tof.com (Thomas Heller)
Date: Fri, 29 Sep 2000 10:48:51 +0200
Subject: [Python-Dev] Patch to avoid conflict with older versions of Python.
References: <ECEPKNMJLHAPFFJHDOJBEEIHDLAA.MarkH@ActiveState.com> <007d01c029e8$00b33570$4500a8c0@thomasnb> <012401c029ea$6cfbc7e0$766940d5@hagrid>
Message-ID: <001601c029f2$1aa72540$4500a8c0@thomasnb>

> > According to my docs, BindImageEx may not be included in early versions
> > of Win95, but who is using that anyway?
>
> lots of people -- the first version of our PythonWare
> installer didn't run on the original Win95 release, and
> we still get complaints about that.
>

Requirements
  Windows NT/2000: Requires Windows NT 4.0 or later.
  Windows 95/98: Requires Windows 95 or later. Available as a
redistributable for Windows 95.

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  Header: Declared in Imagehlp.h.
  Library: Use Imagehlp.lib.

> on the other hand, it's not that hard to use BindImageEx
> only if it exists...
>

Thomas




From tim_one at email.msn.com  Fri Sep 29 11:02:38 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 29 Sep 2000 05:02:38 -0400
Subject: [Python-Dev] Patch to avoid conflict with older versions of Python.
In-Reply-To: <012401c029ea$6cfbc7e0$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCCENKHIAA.tim_one@email.msn.com>

[Thomas Heller]
> According to my docs, BindImageEx may not be included in early
> versions of Win95, but who is using that anyway?

[/F]
> lots of people -- the first version of our PythonWare
> installer didn't run on the original Win95 release, and
> we still get complaints about that.

Indeed, you got one from me <wink>!

> on the other hand, it's not that hard to use BindImageEx
> only if it exists...

I'm *really* going on vacation now, but if BindImageEx makes sense here
(offhand I confess the intended use of it here didn't click for me), MS's
imagehlp.dll is redistributable -- although it appears they split it into
two DLLs for Win2K and made only "the other one" redistributable there
<arghghghgh> ...





From thomas.heller at ion-tof.com  Fri Sep 29 11:15:27 2000
From: thomas.heller at ion-tof.com (Thomas Heller)
Date: Fri, 29 Sep 2000 11:15:27 +0200
Subject: [Python-Dev] Patch to avoid conflict with older versions of Python.
References: <LNBBLJKPBEHFEDALKOLCCENKHIAA.tim_one@email.msn.com>
Message-ID: <002e01c029f5$d24dbc10$4500a8c0@thomasnb>

> I'm *really* going on vacation now, but if BindImageEx makes sense here
> (offhand I confess the intended use of it here didn't click for me), MS's
> imagehlp.dll is redistributable -- although it appears they split it into
> two DLLs for Win2K and made only "the other one" redistributable there
> <arghghghgh> ...

No need to install it on Win2K (may not even be possible?),
only for Win95.

I just checked: imagehlp.dll is NOT included in Win95b (which I still
use on one computer, but I thought I was in a small minority)

Thomas




From jeremy at beopen.com  Fri Sep 29 16:09:16 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Fri, 29 Sep 2000 10:09:16 -0400 (EDT)
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point  question...)
In-Reply-To: <045201c029c9$8f49fd10$8119fea9@neil>
References: <LNBBLJKPBEHFEDALKOLCGELPHIAA.tim_one@email.msn.com>
	<045201c029c9$8f49fd10$8119fea9@neil>
Message-ID: <14804.41612.747364.118819@bitdiddle.concentric.net>

>>>>> "NH" == Neil Hodgson <nhodgson at bigpond.net.au> writes:

  NH>    Finds about 130 characters. The only ones I feel are worth
  NH>    worrying about
  NH> are the half, quarters and eighths (0xbc, 0xbd, 0xbe, 0x215b,
  NH> 0x215c, 0x215d, 0x215e) which are commonly used for expressing
  NH> the prices of stocks and commodities in the US. This may be
  NH> rarely used but it is better to have it available than to have
  NH> people coding up their own translation tables.

The US no longer uses fractions to report stock prices.  Example:
    http://business.nytimes.com/market_summary.asp

LEADERS                            Last      Range         Change    
AMERICAN INDL PPTYS REIT  (IND)   14.06  13.56  - 14.06  0.25  / 1.81% 
R G S ENERGY GROUP INC  (RGS)     28.19  27.50  - 28.19  0.50  / 1.81% 
DRESDNER RCM GLBL STRT INC  (DSF)  6.63   6.63  - 6.63   0.06  / 0.95% 
FALCON PRODS INC  (FCP)            9.63   9.63  - 9.88   0.06  / 0.65% 
GENERAL ELEC CO  (GE)             59.00  58.63  - 59.75  0.19  / 0.32% 

Jeremy



From trentm at ActiveState.com  Fri Sep 29 16:56:34 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Fri, 29 Sep 2000 07:56:34 -0700
Subject: [Python-Dev] Re: WHOA!!! Screw up on my part: how do I undo this (Re: [Python-checkins] CVS: black - Imported sources)
In-Reply-To: <14803.64892.937014.475312@cj42289-a.reston1.va.home.com>; from fdrake@beopen.com on Thu, Sep 28, 2000 at 10:25:00PM -0400
References: <200009270706.AAA21107@slayer.i.sourceforge.net> <20000927003233.C19872@ActiveState.com> <14803.64892.937014.475312@cj42289-a.reston1.va.home.com>
Message-ID: <20000929075634.B15762@ActiveState.com>

On Thu, Sep 28, 2000 at 10:25:00PM -0400, Fred L. Drake, Jr. wrote:
> 
> Trent Mick writes:
>  > I was playing with a different SourceForge project and I screwed up my
>  > CVSROOT (used Python's instead). Sorry SOrry!
> 
>   Well, you blew it.  Don't worry, we'll have you kicked off
> SourceForge in no time!  ;)
>   Well, maybe not.  I've submitted a support request to fix this:
> 
> http://sourceforge.net/support/?func=detailsupport&support_id=106112&group_id=1
> 
> 

Thank you Fred!


Trent

-- 
Trent Mick
TrentM at ActiveState.com



From trentm at ActiveState.com  Fri Sep 29 17:00:17 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Fri, 29 Sep 2000 08:00:17 -0700
Subject: [Python-Dev] Re: WHOA!!! Screw up on my part: how do I undo this (Re: [Python-checkins] CVS: black - Imported sources)
In-Reply-To: <14804.12649.504962.985774@anthem.concentric.net>; from bwarsaw@beopen.com on Fri, Sep 29, 2000 at 02:06:33AM -0400
References: <200009270706.AAA21107@slayer.i.sourceforge.net> <20000927003233.C19872@ActiveState.com> <14804.12649.504962.985774@anthem.concentric.net>
Message-ID: <20000929080017.C15762@ActiveState.com>

On Fri, Sep 29, 2000 at 02:06:33AM -0400, Barry A. Warsaw wrote:
> 
> >>>>> "TM" == Trent Mick <trentm at ActiveState.com> writes:
> 
>     TM> I was playing with a different SourceForge project and I
>     TM> screwed up my CVSROOT (used Python's instead). Sorry SOrry!
> 
>     TM> How do I undo this cleanly? I could 'cvs remove' the
>     TM> README.txt file but that would still leave the top-level
>     TM> 'black/' turd right? Do the SourceForge admin guys have to
>     TM> manually kill the 'black' directory in the repository?
> 
> Once a directory's been added, it's nearly impossible to cleanly delete
> it from CVS.  If it has infected people's working directories, you're
> really screwed, because even if the SF admins remove it from the
> repository, it'll be a pain to clean up on the client side.

Hopefully no client machines were infected. People would have to 'cvs co
black' with the Python CVSROOT. I presume people are only doing either 'cvs
co python' or 'cvs co distutils'. ...or is there some sort of 'cvs co *' type
invocation that people could be (and were) using?



> 
> Probably best thing to do is make sure you "cvs rm" everything in the
> directory and then just let "cvs up -P" remove the empty directory.
> Everybody /is/ using -P (and -d) right? :)
>

I didn't know about -P, but I will use it now. For reference for others:

       -P     Prune (remove) directories that are empty after being
              updated, on checkout or update.  Normally, an empty
              directory (one that is void of revision-controlled files)
              is left alone.  Specifying -P will cause these directories
              to be silently removed from your checked-out sources.
              This does not remove the directory from the repository,
              only from your checked-out copy.  Note that this option is
              implied by the -r or -D options of checkout and export.


Trent


-- 
Trent Mick
TrentM at ActiveState.com



From bwarsaw at beopen.com  Fri Sep 29 17:12:29 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 29 Sep 2000 11:12:29 -0400 (EDT)
Subject: [Python-Dev] Re: WHOA!!! Screw up on my part: how do I undo this (Re: [Python-checkins] CVS: black - Imported sources)
References: <200009270706.AAA21107@slayer.i.sourceforge.net>
	<20000927003233.C19872@ActiveState.com>
	<14804.12649.504962.985774@anthem.concentric.net>
	<20000929080017.C15762@ActiveState.com>
Message-ID: <14804.45405.528913.613816@anthem.concentric.net>

>>>>> "TM" == Trent Mick <trentm at ActiveState.com> writes:

    TM> Hopefully no client machines were infected. People would have
    TM> to 'cvs co black' with the Python CVSROOT. I presume people
    TM> are only doing either 'cvs co python' or 'cvs co
    TM> distutils'. ...or is there some sort of 'cvs co *' type
    TM> invocation that people could and were using?

In fact, I usually only "co -d python python/dist/src" :)  But if you
do a "cvs up -d" at the top-level, I think you'll get the new
directory.  Don't know how many people that'll affect, but if you're
going to wax that directory, the sooner the better!

-Barry



From fdrake at beopen.com  Fri Sep 29 17:21:48 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Fri, 29 Sep 2000 11:21:48 -0400 (EDT)
Subject: [Python-Dev] Re: WHOA!!! Screw up on my part: how do I undo this (Re: [Python-checkins] CVS: black - Imported sources)
In-Reply-To: <14804.12649.504962.985774@anthem.concentric.net>
References: <200009270706.AAA21107@slayer.i.sourceforge.net>
	<20000927003233.C19872@ActiveState.com>
	<14804.12649.504962.985774@anthem.concentric.net>
Message-ID: <14804.45964.428895.57625@cj42289-a.reston1.va.home.com>

Barry A. Warsaw writes:
 > One a directory's been added, it's nearly impossible to cleanly delete
 > it from CVS.  If it's infected people's working directories, you're
 > really screwed, because even if the SF admins remove it from the
 > repository, it'll be a pain to clean up on the client side.

  In general, yes, but since the directory was a separate module (in
CVS terms; a "product" in SF terms), there's no way for it to have been
picked up by clients automatically.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From fdrake at beopen.com  Fri Sep 29 18:15:09 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Fri, 29 Sep 2000 12:15:09 -0400 (EDT)
Subject: [Python-Dev] codecs question
Message-ID: <14804.49165.894978.144346@cj42289-a.reston1.va.home.com>

  Jeremy was just playing with the xml.sax package, and decided to
print the string returned from parsing "&#169;" (the copyright
symbol).  Sure enough, he got a traceback:

>>> print u'\251'

Traceback (most recent call last):
  File "<stdin>", line 1, in ?
UnicodeError: ASCII encoding error: ordinal not in range(128)

and asked me about it.  I was a little surprised myself.  First, that
anyone would use "print" in a SAX handler to start with, and second,
that it was so painful.
  Now, I can chalk this up to not using a reasonable stdout that
understands that Unicode needs to be translated to Latin-1 given my
font selection.  So I looked at the codecs module to provide a usable
output stream.  The EncodedFile class provides a nice wrapper around
another file object, and supports encoding both ways.
  Unfortunately, I can't see what "encoding" I should use if I want to
read & write Unicode string objects to it.  ;(  (Marc-Andre, please
tell me I've missed something!)  I also don't think I
can use it with "print", extended or otherwise.
  The PRINT_ITEM opcode calls PyFile_WriteObject() with whatever it
gets, so that's fine.  Then it converts the object using
PyObject_Str() or PyObject_Repr().  For Unicode objects, the tp_str
handler attempts conversion to the default encoding ("ascii" in this
case), and raises the traceback we see above.
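
The failing conversion step can be reproduced by hand; a minimal
sketch (the encode call below is the same conversion the tp_str
handler attempts, caught via the generic UnicodeError):

```python
# The conversion the tp_str handler performs: Unicode -> default
# "ascii" encoding.  \251 is the copyright sign, which is outside
# range(128), so the conversion fails -- exactly what "print" hits.
try:
    u'\251'.encode('ascii')
    ascii_ok = True
except UnicodeError:
    ascii_ok = False

assert not ascii_ok
# An 8-bit encoding that covers the character succeeds:
assert u'\251'.encode('latin-1') == b'\xa9'
```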
  Perhaps a little extra work is needed in PyFile_WriteObject() to
allow Unicode objects to pass through if the file is merely file-like,
and let the next layer handle the conversion?  This would probably
break code, and therefore not be acceptable.
  On the other hand, it's annoying that I can't create a file-object
that takes Unicode strings from "print", and it doesn't seem intuitive.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From loewis at informatik.hu-berlin.de  Fri Sep 29 19:16:25 2000
From: loewis at informatik.hu-berlin.de (Martin von Loewis)
Date: Fri, 29 Sep 2000 19:16:25 +0200 (MET DST)
Subject: [Python-Dev] codecs question 
Message-ID: <200009291716.TAA05996@pandora.informatik.hu-berlin.de>

>   Unfortunately, I can't see what "encoding" I should use if I want
>   to read & write Unicode string objects to it.  ;( (Marc-Andre,
>   please tell me I've missed something!)

It depends on the output you want to have. One option would be

s=codecs.lookup('unicode-escape')[3](sys.stdout)

Then, s.write(u'\251') prints a string in Python quoting notation.
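
For example (a sketch wrapping an in-memory byte buffer instead of
sys.stdout, so the output can be inspected; index 3 of the lookup
tuple is the codec's StreamWriter):

```python
import codecs
import io

# Wrap a byte stream in the unicode-escape StreamWriter, as above.
buf = io.BytesIO()
s = codecs.lookup('unicode-escape')[3](buf)
s.write(u'\251')

# The copyright sign comes out in Python quoting notation.
assert buf.getvalue() == b'\\xa9'
```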

Unfortunately,

print >>s,u'\251'

won't work, since print *first* tries to convert the argument to a
string, and then prints the string onto the stream.

>  On the other hand, it's annoying that I can't create a file-object
> that takes Unicode strings from "print", and doesn't seem intuitive.

Since you are asking for a hack :-) How about having an additional
letter of 'u' in the "mode" attribute of a file object?

Then, print would be

def print(stream,string):
  if type(string) == UnicodeType:
    if 'u' in stream.mode:
      stream.write(string)
      return
  stream.write(str(string))

The Stream readers and writers would then need to have a mode of 'ru'
or 'wu', respectively.

Any other protocol to signal unicode-awareness in a stream might do as
well.

Regards,
Martin

P.S. Is there some function to retrieve the UCN names from ucnhash.c?



From mal at lemburg.com  Fri Sep 29 20:08:26 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 29 Sep 2000 20:08:26 +0200
Subject: [Python-Dev] codecs question
References: <200009291716.TAA05996@pandora.informatik.hu-berlin.de>
Message-ID: <39D4DA99.53338FA5@lemburg.com>

Martin von Loewis wrote:
> 
> P.S. Is there some function to retrieve the UCN names from ucnhash.c?

No, there's not even a way to extract those names... a table is
there (_Py_UnicodeCharacterName in ucnhash.c), but no access
function.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Fri Sep 29 20:09:13 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 29 Sep 2000 20:09:13 +0200
Subject: [Python-Dev] codecs question
References: <14804.49165.894978.144346@cj42289-a.reston1.va.home.com>
Message-ID: <39D4DAC9.7F8E1CE5@lemburg.com>

"Fred L. Drake, Jr." wrote:
> 
>   Jeremy was just playing with the xml.sax package, and decided to
> print the string returned from parsing "&#169;" (the copyright
> symbol).  Sure enough, he got a traceback:
> 
> >>> print u'\251'
> 
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> UnicodeError: ASCII encoding error: ordinal not in range(128)
> 
> and asked me about it.  I was a little surprised myself.  First, that
> anyone would use "print" in a SAX handler to start with, and second,
> that it was so painful.

That's a consequence of defaulting to ASCII for all platforms
instead of choosing the encoding depending on the current locale
(the site.py file has code which does the latter).

>   Now, I can chalk this up to not using a reasonable stdout that
> understands that Unicode needs to be translated to Latin-1 given my
> font selection.  So I looked at the codecs module to provide a usable
> output stream.  The EncodedFile class provides a nice wrapper around
> another file object, and supports encoding both ways.
>   Unfortunately, I can't see what "encoding" I should use if I want to
> read & write Unicode string objects to it.  ;(  (Marc-Andre, please
> tell me I've missed something!) 

That depends on what you want to see as output ;-) E.g. in
Europe you'd use Latin-1 (which also contains the copyright
symbol).

> I also don't think I
> can use it with "print", extended or otherwise.
>   The PRINT_ITEM opcode calls PyFile_WriteObject() with whatever it
> gets, so that's fine.  Then it converts the object using
> PyObject_Str() or PyObject_Repr().  For Unicode objects, the tp_str
> handler attempts conversion to the default encoding ("ascii" in this
> case), and raises the traceback we see above.

Right.

>   Perhaps a little extra work is needed in PyFile_WriteObject() to
> allow Unicode objects to pass through if the file is merely file-like,
> and let the next layer handle the conversion?  This would probably
> break code, and therefore not be acceptable.
>   On the other hand, it's annoying that I can't create a file-object
> that takes Unicode strings from "print", and doesn't seem intuitive.

The problem is that the .write() method of a file-like object
will most probably only work with string objects. If
it uses "s#" or "t#" it's lucky, because then the argument
parser will apply the necessary magic to the input object
to get out some object ready for writing to the file. Otherwise
it will simply fail with a type error.

Simply allowing PyObject_Str() to return Unicode objects too
is not an alternative either since that would certainly break
tons of code.

Implementing tp_print for Unicode wouldn't get us anything
either.

Perhaps we'll need to fix PyFile_WriteObject() to special
case Unicode and allow calling .write() with a Unicode
object and fix those .write() methods which don't do the
right thing ?!

This is a project for 2.1. In 2.0 only explicitly calling
the .write() method will do the trick and EncodedFile()
helps with this.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From effbot at telia.com  Fri Sep 29 20:28:38 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Fri, 29 Sep 2000 20:28:38 +0200
Subject: [Python-Dev] codecs question 
References: <200009291716.TAA05996@pandora.informatik.hu-berlin.de>
Message-ID: <000001c02a47$f3f5f100$766940d5@hagrid>

> P.S. Is there some function to retrieve the UCN names from ucnhash.c?

the "unicodenames" patch (which replaces ucnhash) includes this
functionality -- but with a little distance, I think it's better to add
it to the unicodedata module.

(it's included in the step 4 patch, soon to be posted to a patch
manager near you...)

</F>




From loewis at informatik.hu-berlin.de  Sat Sep 30 11:47:01 2000
From: loewis at informatik.hu-berlin.de (Martin von Loewis)
Date: Sat, 30 Sep 2000 11:47:01 +0200 (MET DST)
Subject: [Python-Dev] codecs question
In-Reply-To: <000001c02a47$f3f5f100$766940d5@hagrid> (effbot@telia.com)
References: <200009291716.TAA05996@pandora.informatik.hu-berlin.de> <000001c02a47$f3f5f100$766940d5@hagrid>
Message-ID: <200009300947.LAA13652@pandora.informatik.hu-berlin.de>

> the "unicodenames" patch (which replaces ucnhash) includes this
> functionality -- but with a little distance, I think it's better to add
> it to the unicodedata module.
> 
> (it's included in the step 4 patch, soon to be posted to a patch
> manager near you...)

Sounds good. Is there any chance to use this in codecs, then?
I'm thinking of

>>> print u"\N{COPYRIGHT SIGN}".encode("ascii-ucn")
\N{COPYRIGHT SIGN}
>>> print u"\N{COPYRIGHT SIGN}".encode("latin-1-ucn")
?

Regards,
Martin

P.S. Some people will recognize this as the disguised question 'how
can I convert non-convertable characters using the XML entity
notation?'
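
For what it's worth, later Pythons grew an error handler that does
exactly this; shown here as a sketch of the desired behaviour, not as
anything available in 2.0's codec machinery:

```python
# Non-convertible characters become XML character references...
assert u'\N{COPYRIGHT SIGN}'.encode('ascii', 'xmlcharrefreplace') == b'&#169;'
# ...while characters the target encoding covers pass through as-is.
assert u'\N{COPYRIGHT SIGN}'.encode('latin-1', 'xmlcharrefreplace') == b'\xa9'
```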



From mal at lemburg.com  Sat Sep 30 12:21:43 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Sat, 30 Sep 2000 12:21:43 +0200
Subject: [Python-Dev] codecs question
References: <200009291716.TAA05996@pandora.informatik.hu-berlin.de> <000001c02a47$f3f5f100$766940d5@hagrid> <200009300947.LAA13652@pandora.informatik.hu-berlin.de>
Message-ID: <39D5BEB7.F4045E8B@lemburg.com>

Martin von Loewis wrote:
> 
> > the "unicodenames" patch (which replaces ucnhash) includes this
> > functionality -- but with a little distance, I think it's better to add
> > it to the unicodedata module.
> >
> > (it's included in the step 4 patch, soon to be posted to a patch
> > manager near you...)
> 
> Sounds good. Is there any chance to use this in codecs, then?

If you need speed, you'd have to write a C codec for this
and yes: the ucnhash module does export a C API using a
PyCObject which you can use to access the static C data
table.

Don't know if Fredrik's version will also support this.

I think a C function as access method would be more generic
than the current direct C table access.

> I'm thinking of
> 
> >>> print u"\N{COPYRIGHT SIGN}".encode("ascii-ucn")
> \N{COPYRIGHT SIGN}
> >>> print u"\N{COPYRIGHT SIGN}".encode("latin-1-ucn")
> ?
> 
> Regards,
> Martin
> 
> P.S. Some people will recognize this as the disguised question 'how
> can I convert non-convertable characters using the XML entity
> notation?'

If you just need a single encoding, e.g. Latin-1, simply clone
the codec (it's coded in unicodeobject.c) and add the XML entity
processing.

Unfortunately, reusing the existing codecs is not too
efficient: the reason is that there is no error handling
which would permit you to say "encode as far as you can
and then return the encoded data plus a position marker
in the input stream/data".

Perhaps we should add a new standard error handling
scheme "break" which simply stops encoding/decoding
whenever an error occurs ?!

This should then allow reusing existing codecs by
processing the input in slices.
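
A sketch of that slice-wise reuse, assuming only that the encoder
reports the failure position (here via the exception, where the
proposed "break" scheme would return it alongside the partial output);
the substitution emits XML character references as in Martin's example:

```python
def encode_in_slices(text, encoding):
    # Encode as far as possible; at each failure, emit an XML
    # character reference for the offending character, then resume
    # with the remaining slice of the input.
    out = []
    while text:
        try:
            out.append(text.encode(encoding))
            break
        except UnicodeError as e:
            out.append(text[:e.start].encode(encoding))
            out.append(b'&#%d;' % ord(text[e.start]))
            text = text[e.end:]
    return b''.join(out)

assert encode_in_slices(u'caf\xe9', 'ascii') == b'caf&#233;'
```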

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Fri Sep 29 10:15:18 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 29 Sep 2000 10:15:18 +0200
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point  
 question...)
References: <LNBBLJKPBEHFEDALKOLCGELPHIAA.tim_one@email.msn.com> <045201c029c9$8f49fd10$8119fea9@neil>
Message-ID: <39D44F96.D4342ADB@lemburg.com>

Neil Hodgson wrote:
> 
>    The 0x302* 'Hangzhou' numerals look like they should be classified as
> digits.

Can't change the Unicode 3.0 database... so even though this might
be useful in some contexts, let's stick to the standard.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/




From guido at python.org  Sat Sep 30 22:56:18 2000
From: guido at python.org (Guido van Rossum)
Date: Sat, 30 Sep 2000 15:56:18 -0500
Subject: [Python-Dev] Changes in semantics to str()?
Message-ID: <200009302056.PAA14718@cj20424-a.reston1.va.home.com>

When we changed floats to behave differently on repr() than on str(), we
briefly discussed changes to the container objects as well, but
nothing came of it.

Currently, str() of a tuple, list or dictionary is the same as repr()
of those objects.  This is not very consistent.  For example, when we
have a float like 1.1 which can't be represented exactly, str() yields
"1.1" but repr() yields "1.1000000000000001".  But if we place the
same number in a list, it doesn't matter which function we use: we
always get "[1.1000000000000001]".
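
The asymmetry is easy to demonstrate interactively (the digit strings
above are from a 2.0-era repr(); the exact digits vary by version, but
the container behaviour does not):

```python
x = 1.1
# Once the float is inside a container, str() of the container uses
# repr() of the element, so str() and repr() come out identical:
assert str([x]) == repr([x])
assert str((x,)) == repr((x,))
assert str({1: x}) == repr({1: x})
```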

Below I have included changes to listobject.c, tupleobject.c and
dictobject.c that fix this.  The fixes change the print and str()
callbacks for these objects to use PyObject_Str() on the contained
items -- except if the item is a string or Unicode string.  I made
these exceptions because I don't like the idea of str(["abc"])
yielding [abc] -- I'm too used to the idea of seeing ['abc'] here.
And str() of a Unicode object fails when it contains non-ASCII
characters, so that's no good either -- it would break too much code.

Is it too late to check this in?  Another negative consequence would
be that for user-defined or 3rd party extension objects that have
different repr() and str(), like NumPy arrays, it might break some
code -- but I think this is not very likely.

--Guido van Rossum (home page: http://www.python.org/~guido/)

*** dictobject.c	2000/09/01 23:29:27	2.65
--- dictobject.c	2000/09/30 16:03:04
***************
*** 594,599 ****
--- 594,601 ----
  	register int i;
  	register int any;
  	register dictentry *ep;
+ 	PyObject *item;
+ 	int itemflags;
  
  	i = Py_ReprEnter((PyObject*)mp);
  	if (i != 0) {
***************
*** 609,620 ****
  		if (ep->me_value != NULL) {
  			if (any++ > 0)
  				fprintf(fp, ", ");
! 			if (PyObject_Print((PyObject *)ep->me_key, fp, 0)!=0) {
  				Py_ReprLeave((PyObject*)mp);
  				return -1;
  			}
  			fprintf(fp, ": ");
! 			if (PyObject_Print(ep->me_value, fp, 0) != 0) {
  				Py_ReprLeave((PyObject*)mp);
  				return -1;
  			}
--- 611,630 ----
  		if (ep->me_value != NULL) {
  			if (any++ > 0)
  				fprintf(fp, ", ");
! 			item = (PyObject *)ep->me_key;
! 			itemflags = flags;
! 			if (item == NULL || PyString_Check(item) || PyUnicode_Check(item))
! 				itemflags = 0;
! 			if (PyObject_Print(item, fp, itemflags)!=0) {
  				Py_ReprLeave((PyObject*)mp);
  				return -1;
  			}
  			fprintf(fp, ": ");
! 			item = ep->me_value;
! 			itemflags = flags;
! 			if (item == NULL || PyString_Check(item) || PyUnicode_Check(item))
! 				itemflags = 0;
! 			if (PyObject_Print(item, fp, itemflags) != 0) {
  				Py_ReprLeave((PyObject*)mp);
  				return -1;
  			}
***************
*** 661,666 ****
--- 671,722 ----
  	return v;
  }
  
+ static PyObject *
+ dict_str(dictobject *mp)
+ {
+ 	auto PyObject *v;
+ 	PyObject *sepa, *colon, *item, *repr;
+ 	register int i;
+ 	register int any;
+ 	register dictentry *ep;
+ 
+ 	i = Py_ReprEnter((PyObject*)mp);
+ 	if (i != 0) {
+ 		if (i > 0)
+ 			return PyString_FromString("{...}");
+ 		return NULL;
+ 	}
+ 
+ 	v = PyString_FromString("{");
+ 	sepa = PyString_FromString(", ");
+ 	colon = PyString_FromString(": ");
+ 	any = 0;
+ 	for (i = 0, ep = mp->ma_table; i < mp->ma_size && v; i++, ep++) {
+ 		if (ep->me_value != NULL) {
+ 			if (any++)
+ 				PyString_Concat(&v, sepa);
+ 			item = ep->me_key;
+ 			if (item == NULL || PyString_Check(item) || PyUnicode_Check(item))
+ 				repr = PyObject_Repr(item);
+ 			else
+ 				repr = PyObject_Str(item);
+ 			PyString_ConcatAndDel(&v, repr);
+ 			PyString_Concat(&v, colon);
+ 			item = ep->me_value;
+ 			if (item == NULL || PyString_Check(item) || PyUnicode_Check(item))
+ 				repr = PyObject_Repr(item);
+ 			else
+ 				repr = PyObject_Str(item);
+ 			PyString_ConcatAndDel(&v, repr);
+ 		}
+ 	}
+ 	PyString_ConcatAndDel(&v, PyString_FromString("}"));
+ 	Py_ReprLeave((PyObject*)mp);
+ 	Py_XDECREF(sepa);
+ 	Py_XDECREF(colon);
+ 	return v;
+ }
+ 
  static int
  dict_length(dictobject *mp)
  {
***************
*** 1193,1199 ****
  	&dict_as_mapping,	/*tp_as_mapping*/
  	0,		/* tp_hash */
  	0,		/* tp_call */
! 	0,		/* tp_str */
  	0,		/* tp_getattro */
  	0,		/* tp_setattro */
  	0,		/* tp_as_buffer */
--- 1249,1255 ----
  	&dict_as_mapping,	/*tp_as_mapping*/
  	0,		/* tp_hash */
  	0,		/* tp_call */
! 	(reprfunc)dict_str, /* tp_str */
  	0,		/* tp_getattro */
  	0,		/* tp_setattro */
  	0,		/* tp_as_buffer */
Index: listobject.c
===================================================================
RCS file: /cvsroot/python/python/dist/src/Objects/listobject.c,v
retrieving revision 2.88
diff -c -r2.88 listobject.c
*** listobject.c	2000/09/26 05:46:01	2.88
--- listobject.c	2000/09/30 16:03:04
***************
*** 197,203 ****
  static int
  list_print(PyListObject *op, FILE *fp, int flags)
  {
! 	int i;
  
  	i = Py_ReprEnter((PyObject*)op);
  	if (i != 0) {
--- 197,204 ----
  static int
  list_print(PyListObject *op, FILE *fp, int flags)
  {
! 	int i, itemflags;
! 	PyObject *item;
  
  	i = Py_ReprEnter((PyObject*)op);
  	if (i != 0) {
***************
*** 210,216 ****
  	for (i = 0; i < op->ob_size; i++) {
  		if (i > 0)
  			fprintf(fp, ", ");
! 		if (PyObject_Print(op->ob_item[i], fp, 0) != 0) {
  			Py_ReprLeave((PyObject *)op);
  			return -1;
  		}
--- 211,221 ----
  	for (i = 0; i < op->ob_size; i++) {
  		if (i > 0)
  			fprintf(fp, ", ");
! 		item = op->ob_item[i];
! 		itemflags = flags;
! 		if (item == NULL || PyString_Check(item) || PyUnicode_Check(item))
! 			itemflags = 0;
! 		if (PyObject_Print(item, fp, itemflags) != 0) {
  			Py_ReprLeave((PyObject *)op);
  			return -1;
  		}
***************
*** 245,250 ****
--- 250,285 ----
  	return s;
  }
  
+ static PyObject *
+ list_str(PyListObject *v)
+ {
+ 	PyObject *s, *comma, *item, *repr;
+ 	int i;
+ 
+ 	i = Py_ReprEnter((PyObject*)v);
+ 	if (i != 0) {
+ 		if (i > 0)
+ 			return PyString_FromString("[...]");
+ 		return NULL;
+ 	}
+ 	s = PyString_FromString("[");
+ 	comma = PyString_FromString(", ");
+ 	for (i = 0; i < v->ob_size && s != NULL; i++) {
+ 		if (i > 0)
+ 			PyString_Concat(&s, comma);
+ 		item = v->ob_item[i];
+ 		if (item == NULL || PyString_Check(item) || PyUnicode_Check(item))
+ 			repr = PyObject_Repr(item);
+ 		else
+ 			repr = PyObject_Str(item);
+ 		PyString_ConcatAndDel(&s, repr);
+ 	}
+ 	Py_XDECREF(comma);
+ 	PyString_ConcatAndDel(&s, PyString_FromString("]"));
+ 	Py_ReprLeave((PyObject *)v);
+ 	return s;
+ }
+ 
  static int
  list_compare(PyListObject *v, PyListObject *w)
  {
***************
*** 1484,1490 ****
  	0,		/*tp_as_mapping*/
  	0,		/*tp_hash*/
  	0,		/*tp_call*/
! 	0,		/*tp_str*/
  	0,		/*tp_getattro*/
  	0,		/*tp_setattro*/
  	0,		/*tp_as_buffer*/
--- 1519,1525 ----
  	0,		/*tp_as_mapping*/
  	0,		/*tp_hash*/
  	0,		/*tp_call*/
! 	(reprfunc)list_str, /*tp_str*/
  	0,		/*tp_getattro*/
  	0,		/*tp_setattro*/
  	0,		/*tp_as_buffer*/
***************
*** 1561,1567 ****
  	0,		/*tp_as_mapping*/
  	0,		/*tp_hash*/
  	0,		/*tp_call*/
! 	0,		/*tp_str*/
  	0,		/*tp_getattro*/
  	0,		/*tp_setattro*/
  	0,		/*tp_as_buffer*/
--- 1596,1602 ----
  	0,		/*tp_as_mapping*/
  	0,		/*tp_hash*/
  	0,		/*tp_call*/
! 	(reprfunc)list_str, /*tp_str*/
  	0,		/*tp_getattro*/
  	0,		/*tp_setattro*/
  	0,		/*tp_as_buffer*/
Index: tupleobject.c
===================================================================
RCS file: /cvsroot/python/python/dist/src/Objects/tupleobject.c,v
retrieving revision 2.46
diff -c -r2.46 tupleobject.c
*** tupleobject.c	2000/09/15 07:32:39	2.46
--- tupleobject.c	2000/09/30 16:03:04
***************
*** 167,178 ****
  static int
  tupleprint(PyTupleObject *op, FILE *fp, int flags)
  {
! 	int i;
  	fprintf(fp, "(");
  	for (i = 0; i < op->ob_size; i++) {
  		if (i > 0)
  			fprintf(fp, ", ");
! 		if (PyObject_Print(op->ob_item[i], fp, 0) != 0)
  			return -1;
  	}
  	if (op->ob_size == 1)
--- 167,183 ----
  static int
  tupleprint(PyTupleObject *op, FILE *fp, int flags)
  {
! 	int i, itemflags;
! 	PyObject *item;
  	fprintf(fp, "(");
  	for (i = 0; i < op->ob_size; i++) {
  		if (i > 0)
  			fprintf(fp, ", ");
! 		item = op->ob_item[i];
! 		itemflags = flags;
! 		if (item == NULL || PyString_Check(item) || PyUnicode_Check(item))
! 			itemflags = 0;
! 		if (PyObject_Print(item, fp, itemflags) != 0)
  			return -1;
  	}
  	if (op->ob_size == 1)
***************
*** 200,205 ****
--- 205,234 ----
  	return s;
  }
  
+ static PyObject *
+ tuplestr(PyTupleObject *v)
+ {
+ 	PyObject *s, *comma, *item, *repr;
+ 	int i;
+ 	s = PyString_FromString("(");
+ 	comma = PyString_FromString(", ");
+ 	for (i = 0; i < v->ob_size && s != NULL; i++) {
+ 		if (i > 0)
+ 			PyString_Concat(&s, comma);
+ 		item = v->ob_item[i];
+ 		if (item == NULL || PyString_Check(item) || PyUnicode_Check(item))
+ 			repr = PyObject_Repr(item);
+ 		else
+ 			repr = PyObject_Str(item);
+ 		PyString_ConcatAndDel(&s, repr);
+ 	}
+ 	Py_DECREF(comma);
+ 	if (v->ob_size == 1)
+ 		PyString_ConcatAndDel(&s, PyString_FromString(","));
+ 	PyString_ConcatAndDel(&s, PyString_FromString(")"));
+ 	return s;
+ }
+ 
  static int
  tuplecompare(register PyTupleObject *v, register PyTupleObject *w)
  {
***************
*** 412,418 ****
  	0,		/*tp_as_mapping*/
  	(hashfunc)tuplehash, /*tp_hash*/
  	0,		/*tp_call*/
! 	0,		/*tp_str*/
  	0,		/*tp_getattro*/
  	0,		/*tp_setattro*/
  	0,		/*tp_as_buffer*/
--- 441,447 ----
  	0,		/*tp_as_mapping*/
  	(hashfunc)tuplehash, /*tp_hash*/
  	0,		/*tp_call*/
! 	(reprfunc)tuplestr, /*tp_str*/
  	0,		/*tp_getattro*/
  	0,		/*tp_setattro*/
  	0,		/*tp_as_buffer*/



From fdrake at beopen.com  Fri Sep  1 00:01:41 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 31 Aug 2000 18:01:41 -0400 (EDT)
Subject: [Python-Dev] Syntax error in Makefile for "make install"
In-Reply-To: <39AED489.F953E9EE@per.dem.csiro.au>
References: <39AED489.F953E9EE@per.dem.csiro.au>
Message-ID: <14766.54725.466043.196080@cj42289-a.reston1.va.home.com>

Mark Favas writes:
 > Makefile in the libainstall target of "make install" uses the following
 > construct:
 >                 @if [ "$(MACHDEP)" == "beos" ] ; then \
 > This "==" is illegal in all the /bin/sh's I have lying around, and leads
 > to make failing with:
 > /bin/sh: test: unknown operator ==
 > make: *** [libainstall] Error 1

  Fixed; thanks!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From m.favas at per.dem.csiro.au  Fri Sep  1 00:29:47 2000
From: m.favas at per.dem.csiro.au (Mark Favas)
Date: Fri, 01 Sep 2000 06:29:47 +0800
Subject: [Python-Dev] Namespace collision between lib/xml and site-packages/xml
Message-ID: <39AEDC5B.333F737E@per.dem.csiro.au>

On July 26 I reported that the new xml package in the standard library
collides with and overrides the xml package from the xml-sig that may be
installed in site-packages. This is still the case. The new package does
not have the same functionality as the one in site-packages, and hence
my application (and others relying on similar functionality) gets an
import error. I understood that it was planned that the new library xml
package would check for the site-package version, and transparently hand
over to it if it existed. It's not really an option to remove/rename the
xml package in the std lib, or to break existing xml-based code...

Of course, this might be fixed by 2.0b1, or is it a feature that will be
frozen out <wry smile>?

Fred's response was:
"  I expect we'll be making the package in site-packages an extension
provider for the xml package in the standard library.  I'm planning to
discuss this issue at today's PythonLabs meeting." 
-- 
Mark



From ping at lfw.org  Fri Sep  1 01:16:55 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Thu, 31 Aug 2000 18:16:55 -0500 (CDT)
Subject: [Python-Dev] linuxaudiodev test does nothing
In-Reply-To: <14766.50976.102853.695767@buffalo.fnal.gov>
Message-ID: <Pine.LNX.4.10.10008311804230.11804-100000@server1.lfw.org>

On Thu, 31 Aug 2000, Charles G Waldman wrote:
> Alas, even after fixing this, I *still* can't get linuxaudiodev to
> play the damned .au file.  It works fine for the .wav formats.
> 
> I'll continue hacking on this as time permits.

Just so you know -- i was definitely able to get this to work at
some point before when we were trying to fix this.  I changed
test_linuxaudiodev and it played the .AU file correctly.  I haven't
had time to survey what the state of the various modules is now,
though -- i'll have a look around and see what's going on.

Side note: is there a well-defined platform-independent sound
interface we should be conforming to?  It would be nice to have a
single Python function for each of the following things:

    1. Play a .wav file given its filename.

    2. Play a .au file given its filename.

    3. Play some raw audio data, given a string of bytes and a
       sampling rate.

which would work on as many platforms as possible with the same command.
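
Item 1 can be built on top of item 3; here's a rough sketch using the
stdlib wave module, where play_raw is a hypothetical platform-specific
backend (it does not exist as a standard function):

```python
import wave

def play_wav(path, play_raw):
    # Pull the raw frames and sampling rate out of a .wav file and
    # hand them to play_raw(data, rate) -- the hypothetical primitive
    # of item 3 -- so the platform-specific part stays in one place.
    w = wave.open(path, 'rb')
    try:
        play_raw(w.readframes(w.getnframes()), w.getframerate())
    finally:
        w.close()
```

A .au counterpart would do the same via the sunau module.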

A quick glance at audiodev.py shows that it seems to support only
Sun and SGI.  Should it be extended?

If someone's already in charge of this and knows what's up, let me know.
I'm sorry if this is common knowledge of which i was just unaware.



-- ?!ng




From effbot at telia.com  Fri Sep  1 00:47:03 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Fri, 1 Sep 2000 00:47:03 +0200
Subject: [Python-Dev] threadmodule.c comment error? (from comp.lang.python)
Message-ID: <00d001c0139d$7be87900$766940d5@hagrid>

as noted by curtis jensen over at comp.lang.python:

the parse tuple string doesn't quite match the error message
given if the 2nd argument isn't a tuple.  on the other hand, the
args argument is initialized to NULL...

thread_PyThread_start_new_thread(PyObject *self, PyObject *fargs)
{
 PyObject *func, *args = NULL, *keyw = NULL;
 struct bootstate *boot;

 if (!PyArg_ParseTuple(fargs, "OO|O:start_new_thread", &func, &args, &keyw))
  return NULL;
 if (!PyCallable_Check(func)) {
  PyErr_SetString(PyExc_TypeError,
    "first arg must be callable");
  return NULL;
 }
 if (!PyTuple_Check(args)) {
  PyErr_SetString(PyExc_TypeError,
    "optional 2nd arg must be a tuple");
  return NULL;
 }
 if (keyw != NULL && !PyDict_Check(keyw)) {
  PyErr_SetString(PyExc_TypeError,
    "optional 3rd arg must be a dictionary");
  return NULL;
 }

what's the right way to fix this? (change the error message
and remove the initialization, or change the parsetuple string
and the tuple check)

</F>




From effbot at telia.com  Fri Sep  1 00:30:23 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Fri, 1 Sep 2000 00:30:23 +0200
Subject: [Python-Dev] one last SRE headache
References: <LNBBLJKPBEHFEDALKOLCEEELHDAA.tim_one@email.msn.com>
Message-ID: <009301c0139b$0ea31000$766940d5@hagrid>

tim:

> [/F]
> > I had to add one rule:
> >
> >     If it starts with a zero, it's always an octal number.
> >     Up to two more octal digits are accepted after the
> >     leading zero.
> >
> > but this still fails on this pattern:
> >
> >     r'(a)(b)(c)(d)(e)(f)(g)(h)(i)(j)(k)(l)\119'
> >
> > where the last part is supposed to be a reference to
> > group 11, followed by a literal '9'.
> 
> But 9 isn't an octal digit, so it fits w/ your new rule just fine.

last time I checked, "1" wasn't a valid zero.

but nevermind; I think I've figured it out (see other mail)

</F>




From effbot at telia.com  Fri Sep  1 00:28:40 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Fri, 1 Sep 2000 00:28:40 +0200
Subject: [Python-Dev] one last SRE headache
References: <LNBBLJKPBEHFEDALKOLCEEEIHDAA.tim_one@email.msn.com>
Message-ID: <008701c0139a$d1619ae0$766940d5@hagrid>

tim peters:
> The PRE documentation expresses the true intent:
> 
>     \number
>     Matches the contents of the group of the same number. Groups
>     are numbered starting from 1. For example, (.+) \1 matches 'the the'
>     or '55 55', but not 'the end' (note the space after the group). This
>     special sequence can only be used to match one of the first 99 groups.
>     If the first digit of number is 0, or number is 3 octal digits long,
>     it will not be interpreted as a group match, but as the character with
>     octal value number.

yeah, I've read that.  clear as coffee.

but looking at it again, I suppose that the right way to
implement this is (doing the tests in the given order):

    if it starts with zero, it's an octal escape
    (1 or 2 octal digits may follow)

    if it starts with an octal digit, AND is followed
    by two other octal digits, it's an octal escape

    if it starts with any digit, it's a reference
    (1 extra decimal digit may follow)

oh well.  too bad my scanner only provides a one-character
lookahead...
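the three rules can be sketched in python -- a hypothetical helper, not SRE's actual scanner, that classifies the digit string following a backslash:

```python
OCTDIGITS = "01234567"

def classify_escape(digits):
    # Rule 1: a leading zero means octal; up to two more octal digits follow.
    if digits[0] == "0":
        n = 1
        while n < 3 and n < len(digits) and digits[n] in OCTDIGITS:
            n += 1
        return ("octal", digits[:n])
    # Rule 2: an octal digit followed by two more octal digits is octal.
    if len(digits) >= 3 and all(d in OCTDIGITS for d in digits[:3]):
        return ("octal", digits[:3])
    # Rule 3: otherwise it's a group reference; at most two decimal digits.
    n = 2 if len(digits) >= 2 and digits[1].isdigit() else 1
    return ("group", digits[:n])

print(classify_escape("119"))  # group reference 11, leaving a literal "9"
```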

</F>




From bwarsaw at beopen.com  Fri Sep  1 01:22:53 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 31 Aug 2000 19:22:53 -0400 (EDT)
Subject: [Python-Dev] test_gettext.py fails on 64-bit architectures
References: <39AEBD4A.55ABED9E@per.dem.csiro.au>
	<39AE07FF.478F413@per.dem.csiro.au>
	<14766.14278.609327.610929@anthem.concentric.net>
	<39AEBD01.601F7A83@per.dem.csiro.au>
Message-ID: <14766.59597.713039.633184@anthem.concentric.net>

>>>>> "MF" == Mark Favas <m.favas at per.dem.csiro.au> writes:

    MF> Close, but no cigar - fixes the miscalculation of BE_MAGIC,
    MF> but "magic" is still read from the .mo file as
    MF> 0xffffffff950412de (the 64-bit rep of the 32-bit negative
    MF> integer 0x950412de)

Thanks to a quick chat with Tim, who is always quick to grasp the meat
of the issue, we realize we need to & 0xffffffff all the 32 bit
unsigned ints we're reading out of the .mo files.  I'll work out a
patch, and check it in after a test on 32-bit Linux.  Watch for it,
and please try it out on your box.
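The masking trick can be sketched like this (illustrative values; `LE_MAGIC` is the little-endian GNU .mo magic number):

```python
import struct

LE_MAGIC = 0x950412de

# On a 64-bit build of the era, reading this field through a signed 32-bit
# format could sign-extend it; masking restores the unsigned 32-bit value.
sign_extended = -0x6afbed22  # 0x950412de reinterpreted as a signed 32-bit int
assert sign_extended & 0xffffffff == LE_MAGIC

# Byte-swapping under the same mask yields the big-endian magic.
BE_MAGIC = struct.unpack(">I", struct.pack("<I", LE_MAGIC))[0] & 0xffffffff
print(hex(BE_MAGIC))  # 0xde120495
```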

Thanks,
-Barry



From bwarsaw at beopen.com  Fri Sep  1 00:12:23 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 31 Aug 2000 18:12:23 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules Makefile.pre.in,1.64,1.65
References: <200008312153.OAA03214@slayer.i.sourceforge.net>
Message-ID: <14766.55367.854732.727671@anthem.concentric.net>

>>>>> "Fred" == Fred L Drake <fdrake at users.sourceforge.net> writes:

    Fred> "Modules/Setup.in is newer than Moodules/Setup;"; \ !  echo
------------------------------------------^^^
who let the cows in here?



From Vladimir.Marangozov at inrialpes.fr  Fri Sep  1 00:32:50 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Fri, 1 Sep 2000 00:32:50 +0200 (CEST)
Subject: [Python-Dev] lookdict
Message-ID: <200008312232.AAA14305@python.inrialpes.fr>

I'd like to request some clarifications on the recently checked
dict patch. How it is supposed to work and why is this solution okay?

What's the exact purpose of the 2nd string specialization patch?

Besides that, I must say that now the interpreter is noticeably slower
and MAL and I were warning you kindly about this code, which was
fine-tuned over the years. It is very sensitive and was optimized to death.
The patch that did make it was labeled "not ready" and I would have
appreciated another round of review. Not that I disagree, but now I feel
obliged to submit another patch to make some obvious perf improvements
(at least), which simply duplicates work... Fred would have done them
very well, but I haven't had the time to say much about the implementation
because the laconic discussion on the Patch Manager went about
functionality.

Now I'd like to bring this on python-dev and see what exactly happened
to lookdict and what the BeOpen team agreed on regarding this function.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From gstein at lyra.org  Fri Sep  1 03:51:04 2000
From: gstein at lyra.org (Greg Stein)
Date: Thu, 31 Aug 2000 18:51:04 -0700
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Python ceval.c,2.200,2.201
In-Reply-To: <14766.65024.122762.332972@bitdiddle.concentric.net>; from jeremy@beopen.com on Thu, Aug 31, 2000 at 08:53:20PM -0400
References: <200009010002.RAA23432@slayer.i.sourceforge.net> <14766.65024.122762.332972@bitdiddle.concentric.net>
Message-ID: <20000831185103.D3278@lyra.org>

On Thu, Aug 31, 2000 at 08:53:20PM -0400, Jeremy Hylton wrote:
> Any opinion on whether the Py_SetRecursionLimit should do sanity
> checking on its arguments?

-1 ... it's an advanced function. It's the caller's problem if they monkey
it up.

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/



From gstein at lyra.org  Fri Sep  1 04:12:08 2000
From: gstein at lyra.org (Greg Stein)
Date: Thu, 31 Aug 2000 19:12:08 -0700
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Python ceval.c,2.200,2.201
In-Reply-To: <200009010002.RAA23432@slayer.i.sourceforge.net>; from tim_one@users.sourceforge.net on Thu, Aug 31, 2000 at 05:02:01PM -0700
References: <200009010002.RAA23432@slayer.i.sourceforge.net>
Message-ID: <20000831191208.G3278@lyra.org>

On Thu, Aug 31, 2000 at 05:02:01PM -0700, Tim Peters wrote:
> Update of /cvsroot/python/python/dist/src/Python
> In directory slayer.i.sourceforge.net:/tmp/cvs-serv20859/python/dist/src/Python
> 
> Modified Files:
> 	ceval.c 
> Log Message:
> Supply missing prototypes for new Py_{Get,Set}RecursionLimit; fixes compiler wngs;
> un-analize Get's definition ("void" is needed only in declarations, not defns, &
> is generally considered bad style in the latter).

wtf? Placing a void in both declaration *and* definition is "good style".

static int foo(void) { ... }
int bar() { ... }

You're setting yourself up for inconsistency if you don't always use a
prototypical definition. In the above example, foo() must be
declared/defined using a prototype (or you get warnings from gcc when you
compile with -Wmissing-prototypes (which is recommended for developers)).
But you're saying bar() should *not* have a prototype.


-1 on dropping the "void" from the definition. I disagree it is bad form,
and it sets us up for inconsistencies.

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/



From gward at python.net  Fri Sep  1 04:10:47 2000
From: gward at python.net (Greg Ward)
Date: Thu, 31 Aug 2000 19:10:47 -0700
Subject: [Python-Dev] ANNOUNCE: Distutils 0.9.2
Message-ID: <20000831191047.C31473@python.net>

...just in time for the Python 2.0b1 feature freeze, Distutils 0.9.2 has
been released.  Changes since 0.9.1:

  * fixed bug that broke extension-building under Windows for older
    setup scripts (not using the new Extension class)
      
  * new version of bdist_wininst command and associated tools: fixes
    some bugs, produces a smaller executable, and has a nicer GUI
    (thanks to Thomas Heller)
		
  * added some hooks to 'setup()' to allow some slightly sneaky ways
    into the Distutils, in addition to the standard "run 'setup()'
    from a setup script"
	
Get your copy today:

  http://www.python.org/sigs/distutils-sig/download.html
  
        Greg



From jeremy at beopen.com  Fri Sep  1 04:40:25 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 31 Aug 2000 22:40:25 -0400 (EDT)
Subject: [Python-Dev] static int debug = 0;
Message-ID: <14767.5913.521593.234904@bitdiddle.concentric.net>

Quick note on BDFL-approved style for C code.

I recently changed a line in gcmodule.c from
static int debug;
to 
static int debug = 0;

The change is redundant, as several people pointed out, because the C
std requires debug to be initialized to 0.  I didn't realize this.
Inadvertently, however, I made the right change.  The preferred style
is to be explicit about initialization if other code depends on or
assumes that it is initialized to a particular value -- even if that
value is 0.

If the code is guaranteed to do an assignment of its own before the
first use, it's okay to omit the initialization with the decl.

Jeremy






From greg at cosc.canterbury.ac.nz  Fri Sep  1 04:37:36 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 01 Sep 2000 14:37:36 +1200 (NZST)
Subject: [Python-Dev] Pragmas: Just say "No!"
In-Reply-To: <39AE5E79.C2C91730@lemburg.com>
Message-ID: <200009010237.OAA18429@s454.cosc.canterbury.ac.nz>

"M.-A. Lemburg" <mal at lemburg.com>:

> If it's just the word itself that's bugging you, then
> we can have a separate discussion on that. Perhaps "assume"
> or "declare" would be a better candidates.

Yes, "declare" would be better. Although I'm still somewhat
uncomfortable with the idea of naming a language feature
before having a concrete example of what it's going to be
used for.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From guido at beopen.com  Fri Sep  1 05:54:10 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 31 Aug 2000 22:54:10 -0500
Subject: [Python-Dev] linuxaudiodev test does nothing
In-Reply-To: Your message of "Thu, 31 Aug 2000 18:16:55 EST."
             <Pine.LNX.4.10.10008311804230.11804-100000@server1.lfw.org> 
References: <Pine.LNX.4.10.10008311804230.11804-100000@server1.lfw.org> 
Message-ID: <200009010354.WAA30234@cj20424-a.reston1.va.home.com>

> A quick glance at audiodev.py shows that it seems to support only
> Sun and SGI.  Should it be extended?

Yes.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Fri Sep  1 06:00:37 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 31 Aug 2000 23:00:37 -0500
Subject: [Python-Dev] Namespace collision between lib/xml and site-packages/xml
In-Reply-To: Your message of "Fri, 01 Sep 2000 06:29:47 +0800."
             <39AEDC5B.333F737E@per.dem.csiro.au> 
References: <39AEDC5B.333F737E@per.dem.csiro.au> 
Message-ID: <200009010400.XAA30273@cj20424-a.reston1.va.home.com>

> On July 26 I reported that the new xml package in the standard library
> collides with and overrides the xml package from the xml-sig that may be
> installed in site-packages. This is still the case. The new package does
> not have the same functionality as the one in site-packages, and hence
> my application (and others relying on similar functionality) gets an
> import error. I understood that it was planned that the new library xml
> package would check for the site-package version, and transparently hand
> over to it if it existed. It's not really an option to remove/rename the
> xml package in the std lib, or to break existing xml-based code...
> 
> Of course, this might be fixed by 2.0b1, or is it a feature that will be
> frozen out <wry smile>?
> 
> Fred's response was:
> "  I expect we'll be making the package in site-packages an extension
> provider for the xml package in the standard library.  I'm planning to
> discuss this issue at today's PythonLabs meeting." 

I remember our group discussion about this.  What's currently
implemented is what we decided there, based upon (Fred's
representation of) what the XML-sig wanted.  So you don't like this
either, right?

I believe there are two conflicting desires here: (1) the standard XML
package by the core should be named simply "xml"; (2) you want the old
XML-sig code (which lives in a package named "xml" but installed in
site-packages) to override the core xml package.

I don't think that's possible -- at least not without a hack that's
too ugly to accept.

You might be able to get the old XML-sig code to override the core xml
package by creating a symlink named _xmlplus to it in site-packages
though.
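The override hook works roughly like this sketch (simplified; the real `Lib/xml/__init__.py` logic differed in details, and the dummy module here only simulates an installed `_xmlplus`):

```python
import sys
import types

def import_with_override(name, override):
    """If an enhanced package with the alternate name is importable,
    let it masquerade as `name`; otherwise fall back to the stdlib one."""
    try:
        pkg = __import__(override)
    except ImportError:
        pkg = __import__(name)
    sys.modules[name] = pkg
    return pkg

# Simulated usage: pretend an _xmlplus package is installed.
sys.modules["_xmlplus"] = types.ModuleType("_xmlplus")
print(import_with_override("xml", "_xmlplus").__name__)  # _xmlplus
```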

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Fri Sep  1 06:04:02 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 31 Aug 2000 23:04:02 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Python ceval.c,2.200,2.201
In-Reply-To: Your message of "Thu, 31 Aug 2000 19:12:08 MST."
             <20000831191208.G3278@lyra.org> 
References: <200009010002.RAA23432@slayer.i.sourceforge.net>  
            <20000831191208.G3278@lyra.org> 
Message-ID: <200009010404.XAA30306@cj20424-a.reston1.va.home.com>

> You're setting yourself up for inconsistency if you don't always use a
> prototypical definition. In the above example, foo() must be
> declared/defined using a prototype (or you get warnings from gcc when you
> compile with -Wmissing-prototypes (which is recommended for developers)).
> But you're saying bar() should *not* have a prototype.
> 
> 
> -1 on dropping the "void" from the definition. I disagree it is bad form,
> and it sets us up for inconsistencies.

We discussed this briefly today in our group chat, and I'm +0 on
Greg's recommendation (that's +0 on keeping (void) in definitions).

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From tim_one at email.msn.com  Fri Sep  1 05:12:25 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 31 Aug 2000 23:12:25 -0400
Subject: [Python-Dev] RE: [Python-checkins] CVS: python/dist/src/Python ceval.c,2.200,2.201
In-Reply-To: <20000831191208.G3278@lyra.org>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEFJHDAA.tim_one@email.msn.com>

[Greg Stein]
> ...
> static int foo(void) { ... }
> int bar() { ... }
>
> You're setting yourself up for inconsistency if you don't always use a
> prototypical definition. In the above example, foo() must be
> declared/defined using a prototype (or you get warnings from gcc when you
> compile with -Wmissing-prototypes (which is recommended for developers)).
> But you're saying bar() should *not* have a prototype.

This must be about the pragmatics of gcc, as the C std doesn't say any of
that stuff -- to the contrary, in a *definition* (as opposed to a
declaration), bar() and bar(void) are identical in meaning (as far as the
std goes).

But I confess I don't use gcc at the moment, and have mostly used C
grudgingly the past 5 years when porting things to C++, and my "bad style"
really came from the latter (C++ doesn't cater to K&R-style decls or
"Miranda prototypes" at all, so "thing(void)" is just an eyesore there).

> -1 on dropping the "void" from the definition. I disagree it is bad form,
> and it sets us up for inconsistencies.

Good enough for me -- I'll change it back.






From fdrake at beopen.com  Fri Sep  1 05:28:59 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 31 Aug 2000 23:28:59 -0400 (EDT)
Subject: [Python-Dev] static int debug = 0;
In-Reply-To: <14767.5913.521593.234904@bitdiddle.concentric.net>
References: <14767.5913.521593.234904@bitdiddle.concentric.net>
Message-ID: <14767.8827.492944.536878@cj42289-a.reston1.va.home.com>

Jeremy Hylton writes:
 > The change is redundant, as several people pointed out, because the C
 > std requires debug to be initialized to 0.  I didn't realize this.
 > Inadvertently, however, I made the right change.  The preferred style
 > is to be explicit about initialization if other code depends on or
 > assumes that it is initialized to a particular value -- even if that
 > value is 0.

  According to the BDFL?  He's told me *not* to do that if setting it
to 0 (or NULL, in case of a pointer), but I guess that was several
years ago now (before I went to CNRI, I think).
  I need to get a style guide written, I suppose!  -sigh-
  (I agree the right thing is to use explicit initialization, and
would go so far as to say to *always* use it for readability and
robustness in the face of changing code.)


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From jeremy at beopen.com  Fri Sep  1 05:37:41 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 31 Aug 2000 23:37:41 -0400 (EDT)
Subject: [Python-Dev] static int debug = 0;
In-Reply-To: <14767.8827.492944.536878@cj42289-a.reston1.va.home.com>
References: <14767.5913.521593.234904@bitdiddle.concentric.net>
	<14767.8827.492944.536878@cj42289-a.reston1.va.home.com>
Message-ID: <14767.9349.324188.289319@bitdiddle.concentric.net>

>>>>> "FLD" == Fred L Drake, <fdrake at beopen.com> writes:

  FLD> Jeremy Hylton writes:
  >> The change is redundant, as several people pointed out, because
  >> the C std requires debug to be initialized to 0.  I didn't
  >> realize this.  Inadvertently, however, I made the right change.
  >> The preferred style is to be explicit about initialization if
  >> other code depends on or assumes that it is initialized to a
  >> particular value -- even if that value is 0.

  FLD>   According to the BDFL?  He's told me *not* to do that if
  FLD>   setting it
  FLD> to 0 (or NULL, in case of a pointer), but I guess that was
  FLD> several years ago now (before I went to CNRI, I think).

It's these chat sessions.  They bring out the worst in him <wink>.

Jeremy



From guido at beopen.com  Fri Sep  1 06:36:05 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 31 Aug 2000 23:36:05 -0500
Subject: [Python-Dev] static int debug = 0;
In-Reply-To: Your message of "Thu, 31 Aug 2000 23:28:59 -0400."
             <14767.8827.492944.536878@cj42289-a.reston1.va.home.com> 
References: <14767.5913.521593.234904@bitdiddle.concentric.net>  
            <14767.8827.492944.536878@cj42289-a.reston1.va.home.com> 
Message-ID: <200009010436.XAA06824@cj20424-a.reston1.va.home.com>

> Jeremy Hylton writes:
>  > The change is redundant, as several people pointed out, because the C
>  > std requires debug to be initialized to 0.  I didn't realize this.
>  > Inadvertently, however, I made the right change.  The preferred style
>  > is to be explicit about initialization if other code depends on or
>  > assumes that it is initialized to a particular value -- even if that
>  > value is 0.

Fred:
>   According to the BDFL?  He's told me *not* to do that if setting it
> to 0 (or NULL, in case of a pointer), but I guess that was several
> years ago now (before I went to CNRI, I think).

Can't remember that now.  I told Jeremy what he wrote here.

>   I need to get a style guide written, I suppose!  -sigh-

Yes!

>   (I agree the right thing is to use explicit initialization, and
> would go so far as to say to *always* use it for readability and
> robustness in the face of changing code.)

No -- initializing variables that are assigned to first thing later is
less readable.  The presence or absence of the initialization should
be a subtle hint on whether the initial value is used.  If the code
changes, change the initialization.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From tim_one at email.msn.com  Fri Sep  1 05:40:47 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 31 Aug 2000 23:40:47 -0400
Subject: [Python-Dev] test_popen2 broken on Windows
Message-ID: <LNBBLJKPBEHFEDALKOLCGEFLHDAA.tim_one@email.msn.com>

FYI, we know that test_popen2 is broken on Windows.  I'm in the process of
fixing it.





From fdrake at beopen.com  Fri Sep  1 05:42:59 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 31 Aug 2000 23:42:59 -0400 (EDT)
Subject: [Python-Dev] test_popen2 broken on Windows
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEFLHDAA.tim_one@email.msn.com>
References: <LNBBLJKPBEHFEDALKOLCGEFLHDAA.tim_one@email.msn.com>
Message-ID: <14767.9667.205457.791956@cj42289-a.reston1.va.home.com>

Tim Peters writes:
 > FYI, we know that test_popen2 is broken on Windows.  I'm in the process of
 > fixing it.

  If you can think of a good test case for os.popen4(), I'd love to
see it!  I couldn't think of one earlier that even had a remote chance
of being portable.  If you can make one that passes on Windows, I'll
either adapt it or create an alternate for Unix.  ;)
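A portable check is possible because the defining property of popen4 is that stdout and stderr are merged into one stream. Here is a sketch using a Python child process (written with today's `subprocess`, which did not exist in 2000):

```python
import subprocess
import sys

# Run a child that writes to both streams; with stderr redirected into
# stdout, both lines should arrive on one pipe, on Windows and Unix alike.
proc = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; print('to stdout'); print('to stderr', file=sys.stderr)"],
    stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True)
merged, _ = proc.communicate()
assert "to stdout" in merged and "to stderr" in merged
```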


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From tim_one at email.msn.com  Fri Sep  1 05:55:41 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 31 Aug 2000 23:55:41 -0400
Subject: [Python-Dev] FW: test_largefile cause kernel panic in Mac OS X DP4
In-Reply-To: <20000831082821.B3569@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEFMHDAA.tim_one@email.msn.com>

[Trent Mick]
> Tim (or anyone with python-list logs), can you forward this to Sachin (who
> reported the bug).

Sorry for not getting back to you sooner; I just fwd'ed the fellow's
problem as an FYI for the Python-Dev'ers, not as something crucial for
2.0b1.  His symptom is a kernel panic in what looked like a pre-release OS,
and that's certainly not your fault!  Like he said:

>> I guess my next step is to log a bug with Apple.

Since nobody else spoke up, I'll fwd your msg to him eventually, but that
will take a little time to find his address via DejaNews, & it's not a
priority tonight.





From tim_one at email.msn.com  Fri Sep  1 06:03:18 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 1 Sep 2000 00:03:18 -0400
Subject: [Python-Dev] test_popen2 broken on Windows
In-Reply-To: <14767.9667.205457.791956@cj42289-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEFNHDAA.tim_one@email.msn.com>

[Fred]
>   If you can think of a good test case for os.popen4(), I'd love to
> see it!  I couldn't think of one earlier that even had a remote chance
> of being portable.  If you can make one that passes on Windows, I'll
> either adapt it or create an alternate for Unix.  ;)

Not tonight.  I've never used popen4 in my life, and disapprove of almost
all functions with trailing digits in their names.  Also most movies,  and
especially after "The Hidden 2".  How come nobody writes song sequels?
"Stairway to Heaven 2", say, or "Beethoven's Fifth Symphony 3"?  That's one
for Barry to ponder ...

otoh-trailing-digits-are-a-sign-of-quality-in-an-os-name-ly y'rs  - tim





From Mark.Favas at per.dem.csiro.au  Fri Sep  1 09:31:57 2000
From: Mark.Favas at per.dem.csiro.au (Favas, Mark (EM, Floreat))
Date: Fri, 1 Sep 2000 15:31:57 +0800 
Subject: [Python-Dev] Namespace collision between lib/xml and site-pac
	kages/xml
Message-ID: <C03F68DA202BD411B00700B0D022B09E1AD950@martok.wa.CSIRO.AU>

Guido wrote:
>I remember our group discussion about this.  What's currently
>implemented is what we decided there, based upon (Fred's
>representation of) what the XML-sig wanted.  So you don't like this
>either, right?

Hey - not so. I saw the original problem, asked about it, was told it would
be discussed, heard nothing of the results of the discussion, saw that I
still had the same problem close to the release of 2.0b1, thought maybe it
had slipped through the cracks, and asked again in an effort to help. I
apologise if it came across in any other way.

>I believe there are two conflicting desires here: (1) the standard XML
>package by the core should be named simply "xml"; (2) you want the old
>XML-sig code (which lives in a package named "xml" but installed in
>site-packages) to override the core xml package.

I'm happy with (1) being the standard XML package - I thought from Fred's
original post that there might be some way of having both work together. 

>I don't think that's possible -- at least not without a hack that's
>too ugly to accept.

Glad to have this clarified.

>You might be able to get the old XML-sig code to override the core xml
>package by creating a symlink named _xmlplus to it in site-packages
>though.

Thanks for the suggestion - I'll try it. Since my code has to run on Windows
as well, probably the best thing I can do is bundle up the xml-sig stuff in
my distribution, call it something else, and get around it all that way.

Mark



From thomas at xs4all.net  Fri Sep  1 09:41:24 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 1 Sep 2000 09:41:24 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Python ceval.c,2.200,2.201
In-Reply-To: <200009010002.RAA23432@slayer.i.sourceforge.net>; from tim_one@users.sourceforge.net on Thu, Aug 31, 2000 at 05:02:01PM -0700
References: <200009010002.RAA23432@slayer.i.sourceforge.net>
Message-ID: <20000901094123.L12695@xs4all.nl>

On Thu, Aug 31, 2000 at 05:02:01PM -0700, Tim Peters wrote:

> Log Message:
> Supply missing prototypes for new Py_{Get,Set}RecursionLimit; fixes compiler wngs;
> un-analize Get's definition ("void" is needed only in declarations, not defns, &
> is generally considered bad style in the latter).

Funny. I asked this while ANSIfying, and opinions were, well, scattered :)
There are a lot more where that one came from. (See the Modules/ subdir
<wink>)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas at xs4all.net  Fri Sep  1 09:54:09 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 1 Sep 2000 09:54:09 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Objects abstract.c,2.50,2.51
In-Reply-To: <200009010239.TAA27288@slayer.i.sourceforge.net>; from gvanrossum@users.sourceforge.net on Thu, Aug 31, 2000 at 07:39:03PM -0700
References: <200009010239.TAA27288@slayer.i.sourceforge.net>
Message-ID: <20000901095408.M12695@xs4all.nl>

On Thu, Aug 31, 2000 at 07:39:03PM -0700, Guido van Rossum wrote:

> Add parens suggested by gcc -Wall.

No! This groups the checks wrong. HASINPLACE(v) *has* to be true for any of
the other tests to happen. I apologize for botching the earlier 2 versions
and failing to check them; I've been a bit swamped in work the past week :P
I've checked them in the way they should be. (And checked, with gcc -Wall,
this time. The error is really gone.)

> ! 	else if (HASINPLACE(v)
>   		  && ((v->ob_type->tp_as_sequence != NULL &&
> ! 		      (f = v->ob_type->tp_as_sequence->sq_inplace_concat) != NULL))
>   		 || (v->ob_type->tp_as_number != NULL &&
>   		     (f = v->ob_type->tp_as_number->nb_inplace_add) != NULL))
> --- 814,821 ----
>   			return x;
>   	}
> ! 	else if ((HASINPLACE(v)
>   		  && ((v->ob_type->tp_as_sequence != NULL &&
> ! 		       (f = v->ob_type->tp_as_sequence->sq_inplace_concat)
> ! 		       != NULL)))
>   		 || (v->ob_type->tp_as_number != NULL &&
>   		     (f = v->ob_type->tp_as_number->nb_inplace_add) != NULL))

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From mal at lemburg.com  Fri Sep  1 10:43:56 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 01 Sep 2000 10:43:56 +0200
Subject: [Python-Dev] "declare" reserved word (was: pragma)
References: <200009010237.OAA18429@s454.cosc.canterbury.ac.nz>
Message-ID: <39AF6C4C.62451C87@lemburg.com>

Greg Ewing wrote:
> 
> "M.-A. Lemburg" <mal at lemburg.com>:
> 
> > If it's just the word itself that's bugging you, then
> > we can have a separate discussion on that. Perhaps "assume"
> > or "declare" would be a better candidates.
> 
> Yes, "declare" would be better. Although I'm still somewhat
> uncomfortable with the idea of naming a language feature
> before having a concrete example of what it's going to be
> used for.

I gave some examples in the other pragma thread. The main
idea behind "declare" is to define flags at compilation
time, the encoding of string literals being one of the
original motivations for introducing these flags:

declare encoding = "latin-1"
x = u"This text will be interpreted as Latin-1 and stored as Unicode"

declare encoding = "ascii"
y = u"This is supposed to be ASCII, but contains ??? Umlauts - error !"

A similar approach could be done for 8-bit string literals
provided that the default encoding allows storing the
decoded values.

Say the default encoding is "utf-8", then you could write:

declare encoding = "latin-1"
x = "These are the German Umlauts: ???"
# x would then be assigned the corresponding UTF-8 value of that string
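This part of the proposal later became per-file source encoding declarations; today's `compile()` honors such a cookie when given bytes, which can stand in as a sketch of the intended behavior:

```python
# A Latin-1 byte (0xe4, the character U+00E4) in the source text,
# declared via a coding cookie on the first line.
source = b"# -*- coding: latin-1 -*-\nx = '\xe4'\n"
namespace = {}
exec(compile(source, "<example>", "exec"), namespace)
assert namespace["x"] == "\xe4"  # decoded according to the declared encoding
```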

Another motivation for using these flags is providing the
compiler with information about possible assumptions it
can make:

declare globals = "constant"

The compiler can then add code which caches all global
lookups in locals for subsequent use.
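The manual form of that optimization already works: binding a global to a local, e.g. through a default argument, turns each global lookup into a faster local one. A sketch with a hypothetical checksum function:

```python
def checksum_plain(data):
    total = 0
    for b in data:
        total = (total + abs(b)) % 65521  # abs is a global lookup each time
    return total

def checksum_cached(data, abs=abs):
    # 'declare globals = "constant"' would let the compiler do this caching;
    # here we do it by hand with a default argument.
    total = 0
    for b in data:
        total = (total + abs(b)) % 65521
    return total
```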

The reason I'm advertising a new keyword is that we need
a way to tell the compiler about these things from within
the source file. This is currently not possible, but is needed
to allow different modules (from possibly different authors)
to work together without the need to adapt their source
files.

Which flags will actually become available is left to 
a different discussion.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Fri Sep  1 10:55:09 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 01 Sep 2000 10:55:09 +0200
Subject: [Python-Dev] lookdict
References: <200008312232.AAA14305@python.inrialpes.fr>
Message-ID: <39AF6EED.7A591932@lemburg.com>

Vladimir Marangozov wrote:
> 
> I'd like to request some clarifications on the recently checked
> dict patch. How it is supposed to work and why is this solution okay?
> 
> What's the exact purpose of the 2nd string specialization patch?
> 
> Besides that, I must say that now the interpreter is noticeably slower
> and MAL and I were warning you kindly about this code, which was
> fine-tuned over the years. It is very sensitive and was optimized to death.
> The patch that did make it was labeled "not ready" and I would have
> appreciated another round of review. Not that I disagree, but now I feel
> obliged to submit another patch to make some obvious perf improvements
> (at least), which simply duplicates work... Fred would have done them
> very well, but I haven't had the time to say much about the implementation
> because the laconic discussion on the Patch Manager went about
> functionality.
> 
> Now I'd like to bring this on python-dev and see what exactly happened
> to lookdict and what the BeOpen team agreed on regarding this function.

Just for the record:

Python 1.5.2: 3050 pystones
Python 2.0b1: 2850 pystones before the lookup patch
              2900 pystones after the lookup patch
My old considerably patched Python 1.5:
              4000 pystones

I like Fred's idea about the customized and auto-configuring
lookup mechanism. This should definitely go into 2.1... perhaps
even with a hook that allows C extensions to drop in their own
implementations for certain types of dictionaries, e.g. ones
using perfect hash tables.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From ping at lfw.org  Fri Sep  1 11:11:15 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Fri, 1 Sep 2000 05:11:15 -0400 (EDT)
Subject: [Python-Dev] linuxaudiodev test does nothing
In-Reply-To: <14766.58306.977241.439169@buffalo.fnal.gov>
Message-ID: <Pine.LNX.4.10.10009010506380.1061-100000@skuld.lfw.org>

On Thu, 31 Aug 2000, Charles G Waldman wrote:
>  >     3. Play some raw audio data, given a string of bytes and a
>  >        sampling rate.
> 
> This would never be possible unless you also specified the format and
> encoding of the raw data - are they 8-bit or 16-bit, signed or unsigned,
> big-endian or little-endian, linear or logarithmic ("mu-law"), etc.?

You're right, you do have to specify such things.  But when you
do, i'm quite confident that this should be possible, at least
for a variety of common cases.  Certainly raw audio data should
be playable in at least *some* fashion, and we also have a bunch
of very nice functions in the audioop module that can do automatic
conversions if we want to get fancy.
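For instance, audioop can turn mu-law (.au) samples into linear ones.  A
pure-Python sketch of that conversion, the standard CCITT G.711 decode,
illustrative rather than audioop's actual C code:

```python
_BIAS = 0x84  # 132, the mu-law bias

def ulaw_to_linear(byte):
    """Decode one 8-bit mu-law sample to a signed 16-bit linear value."""
    byte = ~byte & 0xFF
    sign = byte & 0x80
    exponent = (byte >> 4) & 0x07
    mantissa = byte & 0x0F
    sample = ((_BIAS << exponent) - _BIAS) + (mantissa << (exponent + 3))
    return -sample if sign else sample

print(ulaw_to_linear(0xFF))  # 0xFF encodes silence: 0
```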

> Trying to do anything with sound in a
> platform-independent manner is near-impossible.  Even the same
> "platform" (e.g. RedHat 6.2 on Intel) will behave differently
> depending on what soundcard is installed.

Are you talking about OSS vs. ALSA?  Didn't they at least try to
keep some of the basic parts of the interface the same?


-- ?!ng




From moshez at math.huji.ac.il  Fri Sep  1 11:42:58 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Fri, 1 Sep 2000 12:42:58 +0300 (IDT)
Subject: [Python-Dev] linuxaudiodev test does nothing
In-Reply-To: <14766.42287.968420.289804@bitdiddle.concentric.net>
Message-ID: <Pine.GSO.4.10.10009011242120.22219-100000@sundial>

On Thu, 31 Aug 2000, Jeremy Hylton wrote:

> Is the test for linuxaudiodev supposed to play the Spanish Inquistion
> .au file?  I just realized that the test does absolutely nothing on my
> machine.  (I guess I need to get my ears to raise an exception if they
> don't hear anything.)
> 
> I can play the .au file and I use a variety of other audio tools
> regularly.  Is Peter still maintaining it or can someone else offer
> some assistance?

It's probably not the case, but check it isn't skipped. I've added code to
liberally skip it in case the user has no permission or no soundcard.

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From tim_one at email.msn.com  Fri Sep  1 13:34:46 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 1 Sep 2000 07:34:46 -0400
Subject: [Python-Dev] Prerelease Python fun on Windows!
Message-ID: <LNBBLJKPBEHFEDALKOLCIEGJHDAA.tim_one@email.msn.com>

A prerelease of the Python2.0b1 Windows installer is now available via
anonymous FTP, from

    python.beopen.com

file

    /pub/windows/beopen-python2b1p1-20000901.exe
    5,766,988 bytes

Be sure to set FTP Binary mode before you get it.

This is not *the* release.  Indeed, the docs are still from some old
pre-beta version of Python 1.6 (sorry, Fred, but I'm really sleepy!).  What
I'm trying to test here is the installer, and the basic integrity of the
installation.  A lot has changed, and we hope all for the better.

Points of particular interest:

+ I'm running a Win98SE laptop.  The install works great for me.  How
  about NT?  2000?  95?  ME?  Win64 <shudder>?

+ For the first time ever, the Windows installer should *not* require
  administrator privileges under NT or 2000.  This is untested.  If you
  log in as an administrator, it should write Python's registry info
  under HKEY_LOCAL_MACHINE.  If not an administrator, it should pop up
  an informative message and write the registry info under
  HKEY_CURRENT_USER instead.  Does this work?  This prerelease includes
  a patch from Mark Hammond that makes Python look in HKCU before HKLM
  (note that that also allows users to override the HKLM settings, if
  desired).

+ Try
    python lib/test/regrtest.py

  test_socket is expected to fail if you're not on a network, or logged
into your ISP, at the time you run the test suite.  Otherwise
  test_socket is expected to pass.  All other tests are expected to
  pass (although, as always, a number of Unix-specific tests should get
  skipped).

+ Get into a DOS-box Python, and try

      import Tkinter
      Tkinter._test()

  This installation of Python should not interfere with, or be damaged
  by, any other installation of Tcl/Tk you happen to have lying around.
  This is also the first time we're using Tcl/Tk 8.3.2, and that needs
  wider testing too.

+ If the Tkinter test worked, try IDLE!
  Start -> Programs -> Python20 -> IDLE.

+ There is no time limit on this installation.  But if you use it for
  more than 30 days, you're going to have to ask us to pay you <wink>.

windows!-it's-not-just-for-breakfast-anymore-ly y'rs  - tim





From nascheme at enme.ucalgary.ca  Fri Sep  1 15:34:46 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Fri, 1 Sep 2000 07:34:46 -0600
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules gcmodule.c,2.9,2.10
In-Reply-To: <200009010401.VAA20868@slayer.i.sourceforge.net>; from Jeremy Hylton on Thu, Aug 31, 2000 at 09:01:59PM -0700
References: <200009010401.VAA20868@slayer.i.sourceforge.net>
Message-ID: <20000901073446.A4782@keymaster.enme.ucalgary.ca>

On Thu, Aug 31, 2000 at 09:01:59PM -0700, Jeremy Hylton wrote:
> set the default threshold much higher
> we don't need to run gc frequently

Are you sure setting it that high (5000 as opposed to 100) is a good
idea?  Did you do any benchmarking?  If with-gc is going to be on by
default in 2.0 then I would agree with setting it high.  If the GC is
optional then I think it should be left as it is.  People explicitly
enabling the GC obviously have a problem with cyclic garbage.

So, is with-gc going to be default?  At this time I would vote no.

  Neil



From jeremy at beopen.com  Fri Sep  1 16:24:46 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Fri, 1 Sep 2000 10:24:46 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules gcmodule.c,2.9,2.10
In-Reply-To: <20000901073446.A4782@keymaster.enme.ucalgary.ca>
References: <200009010401.VAA20868@slayer.i.sourceforge.net>
	<20000901073446.A4782@keymaster.enme.ucalgary.ca>
Message-ID: <14767.48174.81843.299662@bitdiddle.concentric.net>

>>>>> "NS" == Neil Schemenauer <nascheme at enme.ucalgary.ca> writes:

  NS> On Thu, Aug 31, 2000 at 09:01:59PM -0700, Jeremy Hylton wrote:
  >> set the default threshold much higher we don't need to run gc
  >> frequently

  NS> Are you sure setting it that high (5000 as opposed to 100) is a
  NS> good idea?  Did you do any benchmarking?  If with-gc is going to
  NS> be on by default in 2.0 then I would agree with setting it high.
  NS> If the GC is optional then I think it should be left as it is.
  NS> People explicitly enabling the GC obviously have a problem with
  NS> cyclic garbage.

  NS> So, is with-gc going to be default?  At this time I would vote
  NS> no.

For 2.0b1, it will be on by default, which is why I set the threshold
so high.  If we get a lot of problem reports, we can change either
decision for 2.0 final.

Do you disagree?  If so, why?

Even people who do have problems with cyclic garbage don't necessarily
need a collection every 100 allocations.  (Is my understanding of what
the threshold measures correct?)  This threshold causes GC to occur so
frequently that it can happen during the *compilation* of a small
Python script.

Example: The code in Tools/compiler seems to have a cyclic reference
problem, because its memory consumption drops when GC is enabled.
But the difference in total memory consumption with the threshold at
100 vs. 1000 vs. 5000 is not all that noticeable, a few MB.

Jeremy



From skip at mojam.com  Fri Sep  1 16:13:39 2000
From: skip at mojam.com (Skip Montanaro)
Date: Fri, 1 Sep 2000 09:13:39 -0500 (CDT)
Subject: [Python-Dev] cPickle.c out-of-date w.r.t. version in Zope 2.2.1
Message-ID: <14767.47507.843792.223790@beluga.mojam.com>

I'm trying to get Zope 2.2.1 to build so I can use gc to track down a memory
leak.  In working my way through some compilation errors I noticed that
Zope's cPickle.c appears to be somewhat different than Python's version.
(Haven't checked cStringIO.c yet, but I imagine there may be a couple
differences there as well.)

Should we try to sync them up before 2.0b1?  Before 2.0final?  Wait until
2.1?  If so, should I post a patch to the SourceForge Patch Manager or send
diffs to Jim (or both)?

Skip



From thomas at xs4all.net  Fri Sep  1 16:34:52 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 1 Sep 2000 16:34:52 +0200
Subject: [Python-Dev] Prerelease Python fun on Windows!
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIEGJHDAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Fri, Sep 01, 2000 at 07:34:46AM -0400
Message-ID: <20000901163452.N12695@xs4all.nl>

On Fri, Sep 01, 2000 at 07:34:46AM -0400, Tim Peters wrote:

> + I'm running a Win98SE laptop.  The install works great for me.  How
>   about NT?  2000?  95?  ME?  Win64 <shudder>?

It runs fine under Win98 (FE) on my laptop.

> + Try
>     python lib/test/regrtest.py

No strange failures.

> + Get into a DOS-box Python, and try
> 
>       import Tkinter
>       Tkinter._test()
> 
>   This installation of Python should not interfere with, or be damaged
>   by, any other installation of Tcl/Tk you happen to have lying around.
>   This is also the first time we're using Tcl/Tk 8.3.2, and that needs
>   wider testing too.

Correctly uses 8.3.2, and not the 8.1 (or so) that came with Python 1.5.2

> + If the Tkinter test worked, try IDLE!
>   Start -> Programs -> Python20 -> IDLE.

Works, too. I had a funny experience, though. I tried to quit the
interpreter, which I'd started from a DOS box, using ^Z. And it didn't exit.
And then I started IDLE, and IDLE started up, the menus worked, I could open
a new window, but I couldn't type anything. And then I had a bluescreen. But
after the reboot, everything worked fine, even doing the exact same things.

Could just be windows crashing on me, it does that often enough, even on
freshly installed machines. Something about bad karma or something ;)

> + There is no time limit on this installation.  But if you use it for
>   more than 30 days, you're going to have to ask us to pay you <wink>.

> windows!-it's-not-just-for-breakfast-anymore-ly y'rs  - tim

"Hmmm... I think I'll call you lunch."

(Well, Windows may not be green, but it's definitely not ripe yet! Not for
me, anyway :)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From guido at beopen.com  Fri Sep  1 17:43:32 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 01 Sep 2000 10:43:32 -0500
Subject: [Python-Dev] _PyPclose
Message-ID: <200009011543.KAA09487@cj20424-a.reston1.va.home.com>

The _PyPclose fix looks good, Tim!

The sad thing is that if they had implemented their own data structure
to keep track of the mapping between files and processes, none of this
would have been necessary.  Look:

_PyPopenProcs is a dictionary whose keys are FILE* pointers wrapped in
Python longs, and whose values are lists of length 2 containing a
process handle and a file count.  Pseudocode:

# global:
    _PyPopenProcs = None

# in _PyPopen:
    global _PyPopenProcs
    if _PyPopenProcs is None:
        _PyPopenProcs = {}
    files = <list of files created>
    list = [process_handle, len(files)]
    for file in files:
        _PyPopenProcs[id(file)] = list

# in _PyPclose(file):
    global _PyPopenProcs
    list = _PyPopenProcs[id(file)]
    nfiles = list[1]
    if nfiles > 1:
        list[1] = nfiles-1
    else:
        <wait for the process status>
    del _PyPopenProcs[id(file)]
    if len(_PyPopenProcs) == 0:
        _PyPopenProcs = None

This expands to pages of C code!  There's a *lot* of code dealing with
creating the Python objects, error checking, etc.  I bet that it all
would become much smaller and more readable if a custom C-based data
structure was used.  A linked list associating files with processes
would be all that's needed.  We can even afford a linear search of the
list to see if we just closed the last file open for this process.

Sigh.  Maybe for another time.

(That linked list would require a lock of its own.  Fine.)

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From skip at mojam.com  Fri Sep  1 17:03:30 2000
From: skip at mojam.com (Skip Montanaro)
Date: Fri, 1 Sep 2000 10:03:30 -0500 (CDT)
Subject: [Python-Dev] DEBUG_SAVEALL feature for gc not in 2.0b1?
Message-ID: <14767.50498.896689.445018@beluga.mojam.com>


Neil sent me a patch a week or two ago that implemented a DEBUG_SAVEALL flag
for the gc module.  If set, it assigns all cyclic garbage to gc.garbage
instead of deleting it, thus resurrecting the garbage so you can inspect it.
This seems not to have made it into the CVS repository.

I think this is good mojo and deserves to be in the distribution, if not for
the release, then for 2.1 at least.  I've attached the patch Neil sent me
(which includes code, doc and test updates).  It's helped me track down one
(stupid) cyclic trash bug in my own code.  Neil, unless there are strong
arguments to the contrary, I recommend you submit a patch to SF.

Skip

-------------- next part --------------
A non-text attachment was scrubbed...
Name: saveall.patch
Type: application/octet-stream
Size: 9275 bytes
Desc: patch to get gc to resurrect garbage instead of freeing it
URL: <http://mail.python.org/pipermail/python-dev/attachments/20000901/b387b9cb/attachment-0001.obj>

From guido at beopen.com  Fri Sep  1 18:31:26 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 01 Sep 2000 11:31:26 -0500
Subject: [Python-Dev] lookdict
In-Reply-To: Your message of "Fri, 01 Sep 2000 10:55:09 +0200."
             <39AF6EED.7A591932@lemburg.com> 
References: <200008312232.AAA14305@python.inrialpes.fr>  
            <39AF6EED.7A591932@lemburg.com> 
Message-ID: <200009011631.LAA09876@cj20424-a.reston1.va.home.com>

Thanks, Marc-Andre, for pointing out that Fred's lookdict code is
actually an improvement.

The reason for all this is that we found that lookdict() calls
PyObject_Compare() without checking for errors.  If there's a key that
raises an error when compared to another key, the keys compare unequal
and an exception is set, which may disturb an exception that the
caller of PyDict_GetItem() might be calling.  PyDict_GetItem() is
documented as never raising an exception.  This is actually not strong
enough; it was actually intended to never clear an exception either.
The potential errors from PyObject_Compare() violate this contract.
Note that these errors are nothing new; PyObject_Compare() has been
able to raise exceptions for a long time, e.g. from errors raised by
__cmp__().
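The failure mode is easy to reproduce from the Python side: two keys that
hash alike but whose comparison raises.  A sketch (in 2.0-era C,
PyDict_GetItem would mask this error; modern Python propagates it instead):

```python
class BadKey:
    """A key whose comparison always fails, like a __cmp__ that raises."""
    def __hash__(self):
        return 42                   # constant hash forces a key comparison
    def __eq__(self, other):
        raise RuntimeError('comparison blew up')

d = {BadKey(): 'value'}
saw_error = False
try:
    d[BadKey()]                     # hashes collide, so the keys get compared
except RuntimeError:
    saw_error = True
assert saw_error
```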

The first-order fix is to call PyErr_Fetch() and PyErr_Restore()
around the calls to PyObject_Compare().  This is slow (for reasons
Vladimir points out) even though Fred was very careful to only call
PyErr_Fetch() or PyErr_Restore() when absolutely necessary and only
once per lookdict call.  The second-order fix therefore is Fred's
specialization for string-keys-only dicts.

There's another problem: as fixed, lookdict needs a current thread
state!  (Because the exception state is stored per thread.)  There are
cases where PyDict_GetItem() is called when there's no thread state!
The first one we found was Tim Peters' patch for _PyPclose (see
separate message).  There may be others -- we'll have to fix these
when we find them (probably after 2.0b1 is released but hopefully
before 2.0 final).

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From akuchlin at mems-exchange.org  Fri Sep  1 17:42:01 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Fri, 1 Sep 2000 11:42:01 -0400
Subject: [Python-Dev] cPickle.c out-of-date w.r.t. version in Zope 2.2.1
In-Reply-To: <14767.47507.843792.223790@beluga.mojam.com>; from skip@mojam.com on Fri, Sep 01, 2000 at 09:13:39AM -0500
References: <14767.47507.843792.223790@beluga.mojam.com>
Message-ID: <20000901114201.B5855@kronos.cnri.reston.va.us>

On Fri, Sep 01, 2000 at 09:13:39AM -0500, Skip Montanaro wrote:
>leak.  In working my way through some compilation errors I noticed that
>Zope's cPickle.c appears to be somewhat different than Python's version.
>(Haven't checked cStringIO.c yet, but I imagine there may be a couple
>differences there as well.)

There are also diffs in cStringIO.c, though not ones that affect
functionality: ANSI-fication, and a few changes to the Python API
(PyObject_Length -> PyObject_Size, PyObject_NEW -> PyObject_New, &c).

The cPickle.c changes look to be:
    * ANSIfication.
    * API changes.
    * Support for Unicode strings.

The API changes are the most annoying ones, since you need to add
#ifdefs in order for the module to compile with both 1.5.2 and 2.0.
(Might be worth seeing if this can be alleviated with a few strategic
macros, though I think not...)

--amk




From nascheme at enme.ucalgary.ca  Fri Sep  1 17:48:21 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Fri, 1 Sep 2000 09:48:21 -0600
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules gcmodule.c,2.9,2.10
In-Reply-To: <14767.48174.81843.299662@bitdiddle.concentric.net>; from Jeremy Hylton on Fri, Sep 01, 2000 at 10:24:46AM -0400
References: <200009010401.VAA20868@slayer.i.sourceforge.net> <20000901073446.A4782@keymaster.enme.ucalgary.ca> <14767.48174.81843.299662@bitdiddle.concentric.net>
Message-ID: <20000901094821.A5571@keymaster.enme.ucalgary.ca>

On Fri, Sep 01, 2000 at 10:24:46AM -0400, Jeremy Hylton wrote:
> Even people who do have problems with cyclic garbage don't necessarily
> need a collection every 100 allocations.  (Is my understanding of what
> the threshold measures correct?)

It collects every net threshold0 allocations.  If you create and delete
1000 container objects in a loop then no collection would occur.
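The knob being debated is the one exposed as gc.set_threshold(); a quick
check (the values here are the ones from this thread, not recommendations):

```python
import gc

old = gc.get_threshold()
gc.set_threshold(5000, 10, 10)      # the 2.0b1 value under discussion
assert gc.get_threshold() == (5000, 10, 10)
gc.set_threshold(*old)              # restore the interpreter default
```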

> But the difference in total memory consumption with the threshold at
> 100 vs. 1000 vs. 5000 is not all that noticable, a few MB.

The last time I did benchmarks with PyBench and pystone I found that the
difference between threshold0 = 100 and threshold0 = 0 (ie. infinity)
was small.  Remember that the collector only counts container objects.
Creating a thousand dicts with lots of non-container objects inside of
them could easily cause an out of memory situation.

Because of the generational collection usually only threshold0 objects
are examined while collecting.  Thus, setting threshold0 low has the
effect of quickly moving objects into the older generations.  Collection
is quick because only a few objects are examined.  

A portable way to find the total allocated memory would be nice.
Perhaps Vladimir's malloc will help us here.  Alternatively we could
modify PyCore_MALLOC to keep track of it in a global variable.  I think
collecting based on an increase in the total allocated memory would work
better.  What do you think?

More benchmarks should be done too.  Your compiler would probably be a
good candidate.  I won't have time today but maybe tonight.

  Neil



From gward at mems-exchange.org  Fri Sep  1 17:49:45 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Fri, 1 Sep 2000 11:49:45 -0400
Subject: [Python-Dev] linuxaudiodev test does nothing
In-Reply-To: <Pine.LNX.4.10.10008311804230.11804-100000@server1.lfw.org>; from ping@lfw.org on Thu, Aug 31, 2000 at 06:16:55PM -0500
References: <14766.50976.102853.695767@buffalo.fnal.gov> <Pine.LNX.4.10.10008311804230.11804-100000@server1.lfw.org>
Message-ID: <20000901114945.A15688@ludwig.cnri.reston.va.us>

On 31 August 2000, Ka-Ping Yee said:
> Just so you know -- i was definitely able to get this to work at
> some point before when we were trying to fix this.  I changed
> test_linuxaudiodev and it played the .AU file correctly.  I haven't
> had time to survey what the state of the various modules is now,
> though -- i'll have a look around and see what's going on.

I have three copies of test_linuxaudiodev.py in my Lib/test directory:
the original, Ping's version, and Michael Hudson's version.  I can't
remember who hacked whose, ie. if Michael or Ping's is earlier.
Regardless, none of them work.  Here's how they fail:

$ ./python Lib/test/regrtest.py test_linuxaudiodev
test_linuxaudiodev
1 test OK.

...but the sound is horrible: various people opined on this list, many
months ago when I first reported the problem, that it's probably a
format problem.  (The wav/au mixup seems a likely candidate; it can't be
an endianness problem, since the .au file is 8-bit!)

$ ./python Lib/test/regrtest.py test_linuxaudiodev-ping
test_linuxaudiodev-ping
Warning: can't open Lib/test/output/test_linuxaudiodev-ping
test test_linuxaudiodev-ping crashed -- audio format not supported by linuxaudiodev: None
1 test failed: test_linuxaudiodev-ping

...no sound.

./python Lib/test/regrtest.py test_linuxaudiodev-hudson
test_linuxaudiodev-hudson
Warning: can't open Lib/test/output/test_linuxaudiodev-hudson
test test_linuxaudiodev-hudson crashed -- linuxaudiodev.error: (11, 'Resource temporarily unavailable')
1 test failed: test_linuxaudiodev-hudson

...this is the oddest one of all: I get the "crashed" message
immediately, but then the sound starts playing.  I hear "Nobody expects
the Spani---" but then it stops, the test script terminates, and I get
the "1 test failed" message and my shell prompt back.

Confused as hell, and completely ignorant of computer audio,

        Greg
-- 
Greg Ward - software developer                gward at mems-exchange.org
MEMS Exchange / CNRI                           voice: +1-703-262-5376
Reston, Virginia, USA                            fax: +1-703-262-5367



From nascheme at enme.ucalgary.ca  Fri Sep  1 17:56:27 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Fri, 1 Sep 2000 09:56:27 -0600
Subject: [Python-Dev] Re: DEBUG_SAVEALL feature for gc not in 2.0b1?
In-Reply-To: <14767.50498.896689.445018@beluga.mojam.com>; from Skip Montanaro on Fri, Sep 01, 2000 at 10:03:30AM -0500
References: <14767.50498.896689.445018@beluga.mojam.com>
Message-ID: <20000901095627.B5571@keymaster.enme.ucalgary.ca>

On Fri, Sep 01, 2000 at 10:03:30AM -0500, Skip Montanaro wrote:
> Neil sent me a patch a week or two ago that implemented a DEBUG_SAVEALL flag
> for the gc module.

I didn't submit the patch to SF yet because I am thinking of redesigning
the gc module API.  I really don't like the current bitmask interface
for setting options.  The redesign could wait for 2.1 but it would be
nice to not have to change a published API.

Does anyone have any ideas on a good interface for setting various GC
options?  There may be many options and they may change with the
evolution of the collector.  My current idea is to use something like:

    gc.get_option(<name>)

    gc.set_option(<name>, <value>, ...)

with the module defining constants for options.  For example:

    gc.set_option(gc.DEBUG_LEAK, 1)

would enable leak debugging.  Does this look okay?  Should I try to get
it done for 2.0?
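For the debug flags, the proposed pair would be a thin front end over what
gc already exposes; a hypothetical sketch (set_option/get_option are the
names from this proposal, not a real gc API):

```python
import gc

def set_option(name, value):
    """Hypothetical front end mapping option names onto today's gc calls."""
    if name == 'debug':
        gc.set_debug(value)
    elif name == 'threshold':
        gc.set_threshold(*value)
    else:
        raise ValueError('unknown gc option: %r' % (name,))

def get_option(name):
    if name == 'debug':
        return gc.get_debug()
    if name == 'threshold':
        return gc.get_threshold()
    raise ValueError('unknown gc option: %r' % (name,))

set_option('debug', gc.DEBUG_SAVEALL)
assert get_option('debug') == gc.DEBUG_SAVEALL
set_option('debug', 0)              # switch it back off
```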

  Neil



From guido at beopen.com  Fri Sep  1 19:05:21 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 01 Sep 2000 12:05:21 -0500
Subject: [Python-Dev] Prerelease Python fun on Windows!
In-Reply-To: Your message of "Fri, 01 Sep 2000 16:34:52 +0200."
             <20000901163452.N12695@xs4all.nl> 
References: <20000901163452.N12695@xs4all.nl> 
Message-ID: <200009011705.MAA10274@cj20424-a.reston1.va.home.com>

> Works, too. I had a funny experience, though. I tried to quit the
> interpreter, which I'd started from a DOS box, using ^Z. And it didn't exit.

Really?  It didn't exit?  What had you done before?  I do this all the
time without problems.

> And then I started IDLE, and IDLE started up, the menus worked, I could open
> a new window, but I couldn't type anything. And then I had a bluescreen. But
> after the reboot, everything worked fine, even doing the exact same things.
> 
> Could just be windows crashing on me, it does that often enough, even on
> freshly installed machines. Something about bad karma or something ;)

Well, Fredrik Lundh also had some blue screens which he'd reduced to a
DECREF of NULL in _tkinter.  But not fixed, so this may still be
lurking.

On the other hand your laptop might have been screwy already by that
time...

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Fri Sep  1 19:10:35 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 01 Sep 2000 12:10:35 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Objects abstract.c,2.50,2.51
In-Reply-To: Your message of "Fri, 01 Sep 2000 09:54:09 +0200."
             <20000901095408.M12695@xs4all.nl> 
References: <200009010239.TAA27288@slayer.i.sourceforge.net>  
            <20000901095408.M12695@xs4all.nl> 
Message-ID: <200009011710.MAA10327@cj20424-a.reston1.va.home.com>

> On Thu, Aug 31, 2000 at 07:39:03PM -0700, Guido van Rossum wrote:
> 
> > Add parens suggested by gcc -Wall.

Thomas replied:

> No! This groups the checks wrong. HASINPLACE(v) *has* to be true for any of
> the other tests to happen. I apologize for botching the earlier 2 versions
> and failing to check them, I've been a bit swamped in work the past week :P
> I've checked them in the way they should be. (And checked, with gcc -Wall,
> this time. The error is really gone.)

Doh!  Good catch.  But after looking at the code, I understand why
it's so hard to get right: it's indented wrong, and it's got very
convoluted logic.

Suggestion: don't try to put so much stuff in a single if expression!
I find the version below much clearer, even though it may test for
f==NULL a few extra times.  Thomas, can you verify that I haven't
changed the semantics this time?  You can check it in if you like it,
or you can have me check it in.

PyObject *
PyNumber_InPlaceAdd(PyObject *v, PyObject *w)
{
	PyObject * (*f)(PyObject *, PyObject *) = NULL;
	PyObject *x;

	if (PyInstance_Check(v)) {
		if (PyInstance_HalfBinOp(v, w, "__iadd__", &x,
					 PyNumber_Add, 0) <= 0)
			return x;
	}
	else if (HASINPLACE(v)) {
		if (v->ob_type->tp_as_sequence != NULL)
			f = v->ob_type->tp_as_sequence->sq_inplace_concat;
		if (f == NULL && v->ob_type->tp_as_number != NULL)
			f = v->ob_type->tp_as_number->nb_inplace_add;
		if (f != NULL)
			return (*f)(v, w);
	}

	BINOP(v, w, "__add__", "__radd__", PyNumber_Add);

	if (v->ob_type->tp_as_sequence != NULL) {
		f = v->ob_type->tp_as_sequence->sq_concat;
		if (f != NULL)
			return (*f)(v, w);
	}
	if (v->ob_type->tp_as_number != NULL) {
		if (PyNumber_Coerce(&v, &w) != 0)
			return NULL;
		if (v->ob_type->tp_as_number != NULL) {
			f = v->ob_type->tp_as_number->nb_add;
			if (f != NULL)
				x = (*f)(v, w);
		}
		Py_DECREF(v);
		Py_DECREF(w);
		if (f != NULL)
			return x;
	}

	return type_error("bad operand type(s) for +=");
}
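The dispatch order this C preserves (try the in-place slot first, then fall
back to ordinary addition) is the same protocol visible from Python; a small
illustration with a made-up type:

```python
class Vec:
    """Toy type with both an in-place and a regular add."""
    def __init__(self, xs):
        self.xs = list(xs)
    def __iadd__(self, other):      # the in-place slot: mutate, return self
        self.xs.extend(other.xs)
        return self
    def __add__(self, other):       # the fallback: build a fresh object
        return Vec(self.xs + other.xs)

v, w = Vec([1]), Vec([2])
before = id(v)
v += w                              # dispatches to __iadd__ first
assert id(v) == before and v.xs == [1, 2]
```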

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From mal at lemburg.com  Fri Sep  1 18:23:01 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 01 Sep 2000 18:23:01 +0200
Subject: [Python-Dev] cPickle.c out-of-date w.r.t. version in Zope 2.2.1
References: <14767.47507.843792.223790@beluga.mojam.com> <20000901114201.B5855@kronos.cnri.reston.va.us>
Message-ID: <39AFD7E5.93C0F437@lemburg.com>

Andrew Kuchling wrote:
> 
> On Fri, Sep 01, 2000 at 09:13:39AM -0500, Skip Montanaro wrote:
> >leak.  In working my way through some compilation errors I noticed that
> >Zope's cPickle.c appears to be somewhat different than Python's version.
> >(Haven't checked cStringIO.c yet, but I imagine there may be a couple
> >differences there as well.)
> 
> There are also diffs in cStringIO.c, though not ones that affect
> functionality: ANSI-fication, and a few changes to the Python API
> (PyObject_Length -> PyObject_Size, PyObject_NEW -> PyObject_New, &c).
> 
> The cPickle.c changes look to be:
>     * ANSIfication.
>     * API changes.
>     * Support for Unicode strings.

Huh ? There is support for Unicode objects in Python's cPickle.c...
does Zope's version do something different ?
 
> The API changes are the most annoying ones, since you need to add
> #ifdefs in order for the module to compile with both 1.5.2 and 2.0.
> (Might be worth seeing if this can be alleviated with a few strategic
> macros, though I think not...)
> 
> --amk

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From skip at mojam.com  Fri Sep  1 18:48:14 2000
From: skip at mojam.com (Skip Montanaro)
Date: Fri, 1 Sep 2000 11:48:14 -0500 (CDT)
Subject: [Python-Dev] cPickle.c out-of-date w.r.t. version in Zope 2.2.1
In-Reply-To: <20000901114201.B5855@kronos.cnri.reston.va.us>
References: <14767.47507.843792.223790@beluga.mojam.com>
	<20000901114201.B5855@kronos.cnri.reston.va.us>
Message-ID: <14767.56782.649516.231305@beluga.mojam.com>

    amk> There are also diffs in cStringIO.c, though not ones that affect
    amk> functionality: ...

    amk> The API changes are the most annoying ones, since you need to add
    amk> #ifdefs in order for the module to compile with both 1.5.2 and 2.0.

After posting my note I compared the Zope and Py2.0 versions of cPickle.c.
There are enough differences (ANSIfication, gc, unicode support) that it
appears not worthwhile to try to get Python 2.0's cPickle to run under
both 1.5.2 and 2.0.  I tried simply commenting out the relevant lines in Zope's
lib/Components/Setup file.  Zope built fine without them, though I haven't
yet had a chance to test that configuration.  I don't use either cPickle or
cStringIO, nor do I actually use much of Zope, just ZServer and
DocumentTemplates, so I doubt my code would exercise either module heavily.


Skip




From loewis at informatik.hu-berlin.de  Fri Sep  1 19:02:58 2000
From: loewis at informatik.hu-berlin.de (Martin von Loewis)
Date: Fri, 1 Sep 2000 19:02:58 +0200 (MET DST)
Subject: [Python-Dev] DEBUG_SAVEALL feature for gc not in 2.0b1?
Message-ID: <200009011702.TAA26607@pandora.informatik.hu-berlin.de>

> Does this look okay?  Should I try to get it done for 2.0?

I don't see the need for improvement. I consider it a fairly low-level
API, so having bit masks is fine: users dealing with these settings
should know what a bit mask is.

As for the naming of the specific flags: So far, all of them are for
debugging, as would be the proposed DEBUG_SAVEALL. You also have
set/get_threshold, which clearly controls a different kind of setting.

Unless you come up with ten or so additional settings that *must* be
there, I don't see the need for generalizing the API. Why is

  gc.set_option(gc.THRESHOLD, 1000, 100, 10)

so much better than

  gc.set_threshold(1000, 100, 10)

???

Even if you find the need for a better API, it should be possible to
support the current one for a couple more years, no?

Martin




From skip at mojam.com  Fri Sep  1 19:24:58 2000
From: skip at mojam.com (Skip Montanaro)
Date: Fri, 1 Sep 2000 12:24:58 -0500 (CDT)
Subject: [Python-Dev] cPickle.c out-of-date w.r.t. version in Zope 2.2.1
In-Reply-To: <39AFD7E5.93C0F437@lemburg.com>
References: <14767.47507.843792.223790@beluga.mojam.com>
	<20000901114201.B5855@kronos.cnri.reston.va.us>
	<39AFD7E5.93C0F437@lemburg.com>
Message-ID: <14767.58986.387449.850867@beluga.mojam.com>

    >> The cPickle.c changes look to be:
    >> * ANSIfication.
    >> * API changes.
    >> * Support for Unicode strings.

    MAL> Huh ? There is support for Unicode objects in Python's cPickle.c...
    MAL> does Zope's version do something different ?

Zope is still running 1.5.2 and thus has a version of cPickle that is at
least that old.  The RCS revision string is

     * $Id: cPickle.c,v 1.72 2000/05/09 18:05:09 jim Exp $

I saw new unicode functions in the Python 2.0 version of cPickle that
weren't in the version distributed with Zope 2.2.1.  Here's a grep buffer
from XEmacs:

    cd /home/dolphin/skip/src/Zope/lib/Components/cPickle/
    grep -n -i unicode cPickle.c /dev/null

    grep finished with no matches found at Fri Sep  1 12:39:57

Skip



From mal at lemburg.com  Fri Sep  1 19:36:17 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 01 Sep 2000 19:36:17 +0200
Subject: [Python-Dev] Verbosity of the Makefile
Message-ID: <39AFE911.927AEDDF@lemburg.com>

This is pure cosmetics, but I found that the latest CVS versions
of the Parser Makefile have become somewhat verbose.

Is this really needed ?

Also, I'd suggest adding a line

.SILENT:

to the top-level Makefile to make possible errors more visible
(without the parser messages the Makefile messages for a clean
run fit on a 25-line display).

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From bwarsaw at beopen.com  Fri Sep  1 19:54:16 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 1 Sep 2000 13:54:16 -0400 (EDT)
Subject: [Python-Dev] Re: Cookie.py security
References: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com>
	<20000830145152.A24581@illuminatus.timo-tasi.org>
Message-ID: <14767.60744.647516.232634@anthem.concentric.net>

>>>>> "timo" ==   <timo at timo-tasi.org> writes:

    timo> Right now, the shortcut 'Cookie.Cookie()' returns an
    timo> instance of the SmartCookie, which uses Pickle.  Most extant
    timo> examples of using the Cookie module use this shortcut.

    timo> We could change 'Cookie.Cookie()' to return an instance of
    timo> SimpleCookie, which does not use Pickle.  Unfortunately,
    timo> this may break existing code (like Mailman), but there is a
    timo> lot of code out there that it won't break.

Not any more!  Around the Mailman 2.0beta5 time frame, I completely
revamped Mailman's cookie stuff because lots of people were having
problems.  One of the things I suspected was that the binary data in
cookies was giving some browsers headaches.  So I took great pains to
make sure that Mailman only passed in carefully crafted string data,
avoiding Cookie.py's pickle stuff.

I use marshal in the application code, and I go further to `hexlify'
the marshaled data (see binascii.hexlify() in Python 2.0).  That way,
I'm further guaranteed that the cookie data will consist only of
characters in the set [0-9A-F], and I don't need to quote the data
(which was another source of browser incompatibility).  I don't think
I've seen any cookie problems reported from people using Mailman
2.0b5.
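The scheme Barry describes can be sketched in a few lines (the session dictionary below is made up for illustration; note also that binascii.hexlify emits lowercase hex digits):

```python
import binascii
import marshal

# hypothetical session data; Mailman's real cookie contents differ
session = {"user": "barry", "list": "test"}

# marshal the data, then hexlify it: the cookie value is then pure
# hex digits and never needs quoting for the browser
payload = binascii.hexlify(marshal.dumps(session))
assert set(payload) <= set(b"0123456789abcdef")

# the application reverses the transformation on the way back in
assert marshal.loads(binascii.unhexlify(payload)) == session
```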

[Side note: I also changed Mailman to use session cookies by default,
but that probably had no effect on the problems.]

[Side side note: I also had to patch Morsel.OutputString() in my copy
of Cookie.py because there was a test for falseness that should have
been a test for the empty string explicitly.  Otherwise this fails:

    c['foo']['max-age'] = 0

but this succeeds

    c['foo']['max-age'] = "0"

Don't know if that's relevant for Tim's current version.]
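The falseness-versus-empty-string distinction can be shown in isolation; `render` below is a made-up stand-in for Morsel.OutputString, not the real Cookie.py code:

```python
def render(key, value):
    # buggy version: "if value" also drops the legitimate integer 0
    if value:
        return "%s=%s;" % (key, value)
    return ""

def render_fixed(key, value):
    # fixed version: only an explicitly empty string means "unset"
    if value != "":
        return "%s=%s;" % (key, value)
    return ""

assert render("max-age", 0) == ""              # the failure described above
assert render("max-age", "0") == "max-age=0;"  # the string form works
assert render_fixed("max-age", 0) == "max-age=0;"
```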

    timo> Also, people could still use the SmartCookie and
    timo> SerialCookie classes, but now they would be more likely to
    timo> read about them in the documentation because they are
    timo> "outside the beaten path".

My vote would be to get rid of SmartCookie and SerialCookie and stay
with simple string cookie data only.  Applications can do fancier
stuff on their own if they want.

-Barry



From thomas at xs4all.net  Fri Sep  1 20:00:49 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 1 Sep 2000 20:00:49 +0200
Subject: [Python-Dev] Prerelease Python fun on Windows!
In-Reply-To: <200009011705.MAA10274@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Fri, Sep 01, 2000 at 12:05:21PM -0500
References: <20000901163452.N12695@xs4all.nl> <200009011705.MAA10274@cj20424-a.reston1.va.home.com>
Message-ID: <20000901200049.L477@xs4all.nl>

On Fri, Sep 01, 2000 at 12:05:21PM -0500, Guido van Rossum wrote:
> > Works, too. I had a funny experience, though. I tried to quit the
> > interpreter, which I'd started from a DOS box, using ^Z. And it didn't exit.

> Really?  It didn't exit?  What had you done before?  I do this all the
> time without problems.

I remember doing 'dir()' and that's it... probably hit a few cursor keys out
of habit. I was discussing something with a ^@#$*(*#%* suit (the
not-very-intelligent type) and our CEO (who was very interested in the
strange windows, because he thought I was doing something with ADSL :) at the
same time, so I don't remember exactly what I did. I might have hit ^D
before ^Z, though I do remember actively thinking 'must use ^Z' while
starting python, so I don't think so.

When I did roughly the same things after a reboot, all seemed fine. And
yes, I did reboot after installing, before trying things the first time.

> > And then I started IDLE, and IDLE started up, the menus worked, I could open
> > a new window, but I couldn't type anything. And then I had a bluescreen. But
> > after the reboot, everything worked fine, even doing the exact same things.
> > 
> > Could just be windows crashing on me, it does that often enough, even on
> > freshly installed machines. Something about bad karma or something ;)

> Well, Fredrik Lundh also had some blue screens which he'd reduced to a
> DECREF of NULL in _tkinter.  Buyt not fixed, so this may still be
> lurking.

The bluescreen came after my entire explorer froze up, so I'm not sure if it
has to do with python crashing. I found it particularly weird that my
'python' interpreter wouldn't exit, and the IDLE windows were working (ie,
Tk working) but not accepting input -- they shouldn't interfere with each
other, should they ?

My laptop is reasonably stable, though it sometimes has some strange glitches
when viewing avi/mpegs, in particular DVD uhm, 'backups'. But I'm used to
Windows crashing whenever I touch it, so all in all, I think this:

> On the other hand your laptop might have been screwy already by that
> time...

Since all was fine after a reboot, even doing roughly the same things. I'll
see if I can hit it again sometime this weekend. (A full weekend of Python
and Packing ! No work ! Yes!) And I'll do my girl a favor and install
PySol, so she can give it a good testing :-)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From guido at beopen.com  Fri Sep  1 21:34:33 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 01 Sep 2000 14:34:33 -0500
Subject: [Python-Dev] Verbosity of the Makefile
In-Reply-To: Your message of "Fri, 01 Sep 2000 19:36:17 +0200."
             <39AFE911.927AEDDF@lemburg.com> 
References: <39AFE911.927AEDDF@lemburg.com> 
Message-ID: <200009011934.OAA02358@cj20424-a.reston1.va.home.com>

> This is pure cosmetics, but I found that the latest CVS versions
> of the Parser Makefile have become somewhat verbose.
> 
> Is this really needed ?

Like what?  What has been added?

> Also, I'd suggest adding a line
> 
> .SILENT:
> 
> to the top-level Makefile to make possible errors more visible
> (without the parser messages the Makefile messages for a clean
> run fit on a 25-line display).

I tried this, and it's too quiet -- you don't know what's going on at
all any more.  If you like this, just say "make -s".

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From mal at lemburg.com  Fri Sep  1 20:36:37 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 01 Sep 2000 20:36:37 +0200
Subject: [Python-Dev] Verbosity of the Makefile
References: <39AFE911.927AEDDF@lemburg.com> <200009011934.OAA02358@cj20424-a.reston1.va.home.com>
Message-ID: <39AFF735.F9F3A252@lemburg.com>

Guido van Rossum wrote:
> 
> > This is pure cosmetics, but I found that the latest CVS versions
> > of the Parser Makefile have become somewhat verbose.
> >
> > Is this really needed ?
> 
> Like what?  What has been added?

I was referring to this output:

making Makefile in subdirectory Modules
Compiling (meta-) parse tree into NFA grammar
Making DFA for 'single_input' ...
Making DFA for 'file_input' ...
Making DFA for 'eval_input' ...
Making DFA for 'funcdef' ...
Making DFA for 'parameters' ...
Making DFA for 'varargslist' ...
Making DFA for 'fpdef' ...
Making DFA for 'fplist' ...
Making DFA for 'stmt' ...
Making DFA for 'simple_stmt' ...
Making DFA for 'small_stmt' ...
...
Making DFA for 'list_for' ...
Making DFA for 'list_if' ...
Adding FIRST sets ...
Writing graminit.c ...
Writing graminit.h ...
 
> > Also, I'd suggest adding a line
> >
> > .SILENT:
> >
> > to the top-level Makefile to make possible errors more visible
> > (without the parser messages the Makefile messages for a clean
> > run fit on a 25-line display).
> 
> I tried this, and it's too quiet -- you don't know what's going on at
> all any more.  If you like this, just say "make -s".

I know, that's what I have in my .aliases file... just thought
that it might be better to only see problems rather than hundreds
of OS commands.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From thomas at xs4all.net  Fri Sep  1 20:58:41 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 1 Sep 2000 20:58:41 +0200
Subject: [Python-Dev] Verbosity of the Makefile
In-Reply-To: <39AFF735.F9F3A252@lemburg.com>; from mal@lemburg.com on Fri, Sep 01, 2000 at 08:36:37PM +0200
References: <39AFE911.927AEDDF@lemburg.com> <200009011934.OAA02358@cj20424-a.reston1.va.home.com> <39AFF735.F9F3A252@lemburg.com>
Message-ID: <20000901205841.O12695@xs4all.nl>

On Fri, Sep 01, 2000 at 08:36:37PM +0200, M.-A. Lemburg wrote:

> making Makefile in subdirectory Modules
> Compiling (meta-) parse tree into NFA grammar
> Making DFA for 'single_input' ...
> Making DFA for 'file_input' ...
> Making DFA for 'eval_input' ...
> Making DFA for 'funcdef' ...
> Making DFA for 'parameters' ...
> Making DFA for 'varargslist' ...
> Making DFA for 'fpdef' ...
> Making DFA for 'fplist' ...
> Making DFA for 'stmt' ...
> Making DFA for 'simple_stmt' ...
> Making DFA for 'small_stmt' ...
> ...
> Making DFA for 'list_for' ...
> Making DFA for 'list_if' ...
> Adding FIRST sets ...
> Writing graminit.c ...
> Writing graminit.h ...

How about just removing the Grammar rule in releases ? It's only useful for
people fiddling with the Grammar, and we had a lot of those fiddles in the
last few weeks. It's not really necessary to rebuild the grammar after each
reconfigure (which is basically what the Grammar rule does.)

Repetitively-y'rs,
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From guido at beopen.com  Fri Sep  1 22:11:02 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 01 Sep 2000 15:11:02 -0500
Subject: [Python-Dev] Verbosity of the Makefile
In-Reply-To: Your message of "Fri, 01 Sep 2000 20:36:37 +0200."
             <39AFF735.F9F3A252@lemburg.com> 
References: <39AFE911.927AEDDF@lemburg.com> <200009011934.OAA02358@cj20424-a.reston1.va.home.com>  
            <39AFF735.F9F3A252@lemburg.com> 
Message-ID: <200009012011.PAA02974@cj20424-a.reston1.va.home.com>

> I was referring to this output:
> 
> making Makefile in subdirectory Modules
> Compiling (meta-) parse tree into NFA grammar
> Making DFA for 'single_input' ...
> Making DFA for 'file_input' ...
> Making DFA for 'eval_input' ...
> Making DFA for 'funcdef' ...
> Making DFA for 'parameters' ...
> Making DFA for 'varargslist' ...
> Making DFA for 'fpdef' ...
> Making DFA for 'fplist' ...
> Making DFA for 'stmt' ...
> Making DFA for 'simple_stmt' ...
> Making DFA for 'small_stmt' ...
> ...
> Making DFA for 'list_for' ...
> Making DFA for 'list_if' ...
> Adding FIRST sets ...
> Writing graminit.c ...
> Writing graminit.h ...

This should only happen after "make clean" right?  If it annoys you,
we could add >/dev/null to the pgen rule.

> > > Also, I'd suggest adding a line
> > >
> > > .SILENT:
> > >
> > > to the top-level Makefile to make possible errors more visible
> > > (without the parser messages the Makefile messages for a clean
> > > run fit on a 25-line display).
> > 
> > I tried this, and it's too quiet -- you don't know what's going on at
> > all any more.  If you like this, just say "make -s".
> 
> I know, that's what I have in my .aliases file... just thought
> that it might be better to only see problems rather than hundreds
> of OS commands.

-1.  It's too silent to be a good default.  Someone who first unpacks
and builds Python and is used to building other projects would wonder
why make is "hanging" without printing anything.  I've never seen a
Makefile that had this right out of the box.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From nascheme at enme.ucalgary.ca  Fri Sep  1 22:21:36 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Fri, 1 Sep 2000 14:21:36 -0600
Subject: [Python-Dev] Verbosity of the Makefile
In-Reply-To: <200009012011.PAA02974@cj20424-a.reston1.va.home.com>; from Guido van Rossum on Fri, Sep 01, 2000 at 03:11:02PM -0500
References: <39AFE911.927AEDDF@lemburg.com> <200009011934.OAA02358@cj20424-a.reston1.va.home.com> <39AFF735.F9F3A252@lemburg.com> <200009012011.PAA02974@cj20424-a.reston1.va.home.com>
Message-ID: <20000901142136.A8205@keymaster.enme.ucalgary.ca>

I'm going to pipe up again about non-recursive makefiles being a good
thing.  This is another reason.

  Neil



From guido at beopen.com  Fri Sep  1 23:48:02 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 01 Sep 2000 16:48:02 -0500
Subject: [Python-Dev] threadmodule.c comment error? (from comp.lang.python)
In-Reply-To: Your message of "Fri, 01 Sep 2000 00:47:03 +0200."
             <00d001c0139d$7be87900$766940d5@hagrid> 
References: <00d001c0139d$7be87900$766940d5@hagrid> 
Message-ID: <200009012148.QAA08086@cj20424-a.reston1.va.home.com>

> the parse tuple string doesn't quite match the error message
> given if the 2nd argument isn't a tuple.  on the other hand, the
> args argument is initialized to NULL...

I was puzzled until I realized you mean that the error message lies
about the 2nd arg being optional.

I'll remove the word "optional" from the message.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From Vladimir.Marangozov at inrialpes.fr  Fri Sep  1 22:58:50 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Fri, 1 Sep 2000 22:58:50 +0200 (CEST)
Subject: [Python-Dev] lookdict
In-Reply-To: <200009011631.LAA09876@cj20424-a.reston1.va.home.com> from "Guido van Rossum" at Sep 01, 2000 11:31:26 AM
Message-ID: <200009012058.WAA28061@python.inrialpes.fr>

Aha. Thanks for the explanation.

Guido van Rossum wrote:
> 
> Thanks, Marc-Andre, for pointing out that Fred's lookdict code is
> actually an improvement.

Right. I was too fast. There is some speedup due to the string
specialization. I'll post a patch to SF with some more tweaks
of this implementation. Briefly:

- do not call PyErr_Clear() systematically after PyObject_Compare();
  only if (!error_restore && PyErr_Occurred())
- defer variable initializations after common return cases
- avoid using more vars in lookdict_string + specialize string_compare()
- inline the most frequent case in PyDict_GetItem (the first item probe)

> The reason for all this is that we found that lookdict() calls
> PyObject_Compare() without checking for errors.  If there's a key that
> raises an error when compared to another key, the keys compare unequal
> and an exception is set, which may disturb an exception that the
> caller of PyDict_GetItem() might be calling.  PyDict_GetItem() is
> documented as never raising an exception.  This is actually not strong
> enough; it was actually intended to never clear an exception either.
> The potential errors from PyObject_Compare() violate this contract.
> Note that these errors are nothing new; PyObject_Compare() has been
> able to raise exceptions for a long time, e.g. from errors raised by
> __cmp__().
> 
> The first-order fix is to call PyErr_Fetch() and PyErr_restore()
> around the calls to PyObject_Compare().  This is slow (for reasons
> Vladimir points out) even though Fred was very careful to only call
> PyErr_Fetch() or PyErr_Restore() when absolutely necessary and only
> once per lookdict call.  The second-order fix therefore is Fred's
> specialization for string-keys-only dicts.
> 
> There's another problem: as fixed, lookdict needs a current thread
> state!  (Because the exception state is stored per thread.)  There are
> cases where PyDict_GetItem() is called when there's no thread state!
> The first one we found was Tim Peters' patch for _PyPclose (see
> separate message).  There may be others -- we'll have to fix these
> when we find them (probably after 2.0b1 is released but hopefully
> before 2.0 final).

Hm. Question: is it possible for the thread state to swap during
PyObject_Compare()? If it is possible, things are more complicated
than I thought...

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From Vladimir.Marangozov at inrialpes.fr  Fri Sep  1 23:08:14 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Fri, 1 Sep 2000 23:08:14 +0200 (CEST)
Subject: [Python-Dev] Re: DEBUG_SAVEALL feature for gc not in 2.0b1?
In-Reply-To: <20000901095627.B5571@keymaster.enme.ucalgary.ca> from "Neil Schemenauer" at Sep 01, 2000 09:56:27 AM
Message-ID: <200009012108.XAA28091@python.inrialpes.fr>

Neil Schemenauer wrote:
> 
> I didn't submit the patch to SF yet because I am thinking of redesigning
> the gc module API.  I really don't like the current bitmask interface
> for setting options.

Why? There's nothing wrong with it.

> 
> Does anyone have any ideas on a good interface for setting various GC
> options?  There may be many options and they may change with the
> evolution of the collector.  My current idea is to use something like:
> 
>     gc.get_option(<name>)
> 
>     gc.set_option(<name>, <value>, ...)
> 
> with the module defining constants for options.  For example:
> 
>     gc.set_option(gc.DEBUG_LEAK, 1)
> 
> would enable leak debugging.  Does this look okay?  Should I try to get
> it done for 2.0?

This is too much. Don't worry, it's perfect as is.
Also, I support the idea of exporting the collected garbage for
debugging -- haven't looked at the patch though. Is it possible
to collect it subsequently?

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From guido at beopen.com  Sat Sep  2 00:04:48 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 01 Sep 2000 17:04:48 -0500
Subject: [Python-Dev] lookdict
In-Reply-To: Your message of "Fri, 01 Sep 2000 22:58:50 +0200."
             <200009012058.WAA28061@python.inrialpes.fr> 
References: <200009012058.WAA28061@python.inrialpes.fr> 
Message-ID: <200009012204.RAA08266@cj20424-a.reston1.va.home.com>

> Right. I was too fast. There is some speedup due to the string
> specialization. I'll post a patch to SF with some more tweaks
> of this implementation. Briefly:
> 
> - do not call PyErr_Clear() systematically after PyObject_Compare();
>   only if (!error_restore && PyErr_Occurred())

What do you mean?  The lookdict code checked in already checks
PyErr_Occurred().

> - defer variable initializations after common return cases
> - avoid using more vars in lookdict_string + specialize string_compare()
> - inline the most frequent case in PyDict_GetItem (the first item probe)

Cool.

> Hm. Question: is it possible for the thread state to swap during
> PyObject_Compare()? If it is possible, things are more complicated
> than I thought...

Doesn't matter -- it will always swap back.  It's tied to the
interpreter lock.

Now, for truly devious code dealing with the lock and thread state,
see the changes to _PyPclose() that Tim Peters just checked in...

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From Vladimir.Marangozov at inrialpes.fr  Fri Sep  1 23:16:23 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Fri, 1 Sep 2000 23:16:23 +0200 (CEST)
Subject: [Python-Dev] lookdict
In-Reply-To: <200009012204.RAA08266@cj20424-a.reston1.va.home.com> from "Guido van Rossum" at Sep 01, 2000 05:04:48 PM
Message-ID: <200009012116.XAA28130@python.inrialpes.fr>

Guido van Rossum wrote:
> 
> > Right. I was too fast. There is some speedup due to the string
> > specialization. I'll post a patch to SF with some more tweaks
> > of this implementation. Briefly:
> > 
> > - do not call PyErr_Clear() systematically after PyObject_Compare();
> >   only if (!error_restore && PyErr_Occurred())
> 
> What do you mean?  The lookdict code checked in already checks
> PyErr_Occurred().

I was too fast again.  Actually PyErr_Clear() is called on PyErr_Occurred().
PyErr_Occurred() is called systematically after PyObject_Compare()
and it will evaluate to true even if the error was previously fetched.

So I mean that the test for detecting whether a *new* exception is
raised by PyObject_Compare() is (!error_restore && PyErr_Occurred())
because error_restore is set only when there's a previous exception
in place (before the call to PyObject_Compare). And only in this case
we need to clear the new error.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From nascheme at enme.ucalgary.ca  Fri Sep  1 23:36:12 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Fri, 1 Sep 2000 15:36:12 -0600
Subject: [Python-Dev] Re: DEBUG_SAVEALL feature for gc not in 2.0b1?
In-Reply-To: <200009012108.XAA28091@python.inrialpes.fr>; from Vladimir Marangozov on Fri, Sep 01, 2000 at 11:08:14PM +0200
References: <20000901095627.B5571@keymaster.enme.ucalgary.ca> <200009012108.XAA28091@python.inrialpes.fr>
Message-ID: <20000901153612.A9121@keymaster.enme.ucalgary.ca>

On Fri, Sep 01, 2000 at 11:08:14PM +0200, Vladimir Marangozov wrote:
> Also, I support the idea of exporting the collected garbage for
> debugging -- haven't looked at the patch though. Is it possible
> to collect it subsequently?

No.  Once objects are in gc.garbage they are back under the user's
control.  How do you see things working otherwise?

  Neil



From Vladimir.Marangozov at inrialpes.fr  Fri Sep  1 23:47:59 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Fri, 1 Sep 2000 23:47:59 +0200 (CEST)
Subject: [Python-Dev] Re: DEBUG_SAVEALL feature for gc not in 2.0b1?
In-Reply-To: <20000901153612.A9121@keymaster.enme.ucalgary.ca> from "Neil Schemenauer" at Sep 01, 2000 03:36:12 PM
Message-ID: <200009012147.XAA28215@python.inrialpes.fr>

Neil Schemenauer wrote:
> 
> On Fri, Sep 01, 2000 at 11:08:14PM +0200, Vladimir Marangozov wrote:
> > Also, I support the idea of exporting the collected garbage for
> > debugging -- haven't looked at the patch though. Is it possible
> > to collect it subsequently?
> 
> No.  Once objects are in gc.garbage they are back under the users
> control.  How do you see things working otherwise?

By putting them in gc.collected_garbage. The next collect() should be
able to empty this list if the DEBUG_SAVEALL flag is not set. Do you
see any problems with this?

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From guido at beopen.com  Sat Sep  2 00:43:29 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 01 Sep 2000 17:43:29 -0500
Subject: [Python-Dev] lookdict
In-Reply-To: Your message of "Fri, 01 Sep 2000 23:16:23 +0200."
             <200009012116.XAA28130@python.inrialpes.fr> 
References: <200009012116.XAA28130@python.inrialpes.fr> 
Message-ID: <200009012243.RAA08429@cj20424-a.reston1.va.home.com>

> > > - do not call PyErr_Clear() systematically after PyObject_Compare();
> > >   only if (!error_restore && PyErr_Occurred())
> > 
> > What do you mean?  The lookdict code checked in already checks
> > PyErr_Occurred().
> 
> I was too fast again.  Actually PyErr_Clear() is called on PyErr_Occurred().
> PyErr_Occurred() is called systematically after PyObject_Compare()
> and it will evaluate to true even if the error was previously fetched.

No, PyErr_Fetch() clears the exception!  PyErr_Restore() restores it.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From Vladimir.Marangozov at inrialpes.fr  Fri Sep  1 23:51:47 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Fri, 1 Sep 2000 23:51:47 +0200 (CEST)
Subject: [Python-Dev] lookdict
In-Reply-To: <200009012243.RAA08429@cj20424-a.reston1.va.home.com> from "Guido van Rossum" at Sep 01, 2000 05:43:29 PM
Message-ID: <200009012151.XAA28257@python.inrialpes.fr>

Guido van Rossum wrote:
> 
> > > > - do not call PyErr_Clear() systematically after PyObject_Compare();
> > > >   only if (!error_restore && PyErr_Occurred())
> > > 
> > > What do you mean?  The lookdict code checked in already checks
> > > PyErr_Occurred().
> > 
> > I was too fast again.  Actually PyErr_Clear() is called on PyErr_Occurred().
> > PyErr_Occurred() is called systematically after PyObject_Compare()
> > and it will evaluate to true even if the error was previously fetched.
> 
> No, PyErr_Fetch() clears the exception!  PyErr_Restore() restores it.

Oops, right. This saves a function call, then. Still good.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From tim_one at email.msn.com  Fri Sep  1 23:53:09 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 1 Sep 2000 17:53:09 -0400
Subject: [Python-Dev] Even more prerelease Python fun on Windows!
Message-ID: <LNBBLJKPBEHFEDALKOLCMEIDHDAA.tim_one@email.msn.com>

As below, except the new file is

    /pub/windows/beopen-python2b1p2-20000901.exe
    5,783,115 bytes

still from anonymous FTP at python.beopen.com.  The p1 version has been
removed.

+ test_popen2 should work on Windows 2000 now (turned out that,
  as feared, MS "more" doesn't work the same way across Windows
  flavors).

+ Minor changes to the installer.

+ New LICENSE.txt and README.txt in the root of your Python
  installation.

+ Whatever other bugfixes people committed in the 8 hours since
  2b1p1 was built.

Thanks for the help so far!  We've learned that things are generally working
well; that on Windows 2000 the correct one of the "admin" or "non-admin"
installs works & is correctly triggered by whether the user has admin
privileges; and that Thomas's Win98FE suffers infinitely more blue-screen
deaths than Tim's Win98SE ever did <wink>.

Haven't heard from anyone on Win95, Windows Me, or Windows NT yet.  And I'm
downright eager to ignore Win64 for now.

-----Original Message-----
Sent: Friday, September 01, 2000 7:35 AM
To: PythonDev; Audun.Runde at sas.com
Cc: audun at mindspring.com
Subject: [Python-Dev] Prerelease Python fun on Windows!


A prerelease of the Python2.0b1 Windows installer is now available via
anonymous FTP, from

    python.beopen.com

file

    /pub/windows/beopen-python2b1p1-20000901.exe
    5,766,988 bytes

Be sure to set FTP Binary mode before you get it.

This is not *the* release.  Indeed, the docs are still from some old
pre-beta version of Python 1.6 (sorry, Fred, but I'm really sleepy!).  What
I'm trying to test here is the installer, and the basic integrity of the
installation.  A lot has changed, and we hope all for the better.

Points of particular interest:

+ I'm running a Win98SE laptop.  The install works great for me.  How
  about NT?  2000?  95?  ME?  Win64 <shudder>?

+ For the first time ever, the Windows installer should *not* require
  administrator privileges under NT or 2000.  This is untested.  If you
  log in as an administrator, it should write Python's registry info
  under HKEY_LOCAL_MACHINE.  If not an administrator, it should pop up
  an informative message and write the registry info under
  HKEY_CURRENT_USER instead.  Does this work?  This prerelease includes
  a patch from Mark Hammond that makes Python look in HKCU before HKLM
  (note that that also allows users to override the HKLM settings, if
  desired).

+ Try
    python lib/test/regrtest.py

  test_socket is expected to fail if you're not on a network, or logged
  into your ISP, at the time you run the test suite.  Otherwise
  test_socket is expected to pass.  All other tests are expected to
  pass (although, as always, a number of Unix-specific tests should get
  skipped).

+ Get into a DOS-box Python, and try

      import Tkinter
      Tkinter._test()

  This installation of Python should not interfere with, or be damaged
  by, any other installation of Tcl/Tk you happen to have lying around.
  This is also the first time we're using Tcl/Tk 8.3.2, and that needs
  wider testing too.

+ If the Tkinter test worked, try IDLE!
  Start -> Programs -> Python20 -> IDLE.

+ There is no time limit on this installation.  But if you use it for
  more than 30 days, you're going to have to ask us to pay you <wink>.

windows!-it's-not-just-for-breakfast-anymore-ly y'rs  - tim



_______________________________________________
Python-Dev mailing list
Python-Dev at python.org
http://www.python.org/mailman/listinfo/python-dev





From skip at mojam.com  Sat Sep  2 00:08:05 2000
From: skip at mojam.com (Skip Montanaro)
Date: Fri, 1 Sep 2000 17:08:05 -0500 (CDT)
Subject: [Python-Dev] Re: DEBUG_SAVEALL feature for gc not in 2.0b1?
In-Reply-To: <20000901153612.A9121@keymaster.enme.ucalgary.ca>
References: <20000901095627.B5571@keymaster.enme.ucalgary.ca>
	<200009012108.XAA28091@python.inrialpes.fr>
	<20000901153612.A9121@keymaster.enme.ucalgary.ca>
Message-ID: <14768.10437.352066.987557@beluga.mojam.com>

>>>>> "Neil" == Neil Schemenauer <nascheme at enme.ucalgary.ca> writes:

    Neil> On Fri, Sep 01, 2000 at 11:08:14PM +0200, Vladimir Marangozov wrote:
    >> Also, I support the idea of exporting the collected garbage for
    >> debugging -- haven't looked at the patch though. Is it possible
    >> to collect it subsequently?

    Neil> No.  Once objects are in gc.garbage they are back under the user's
    Neil> control.  How do you see things working otherwise?

Can't you just turn off gc.DEBUG_SAVEALL and reinitialize gc.garbage to []?
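That sequence can be sketched directly with the flag discussed in this
thread:

```python
import gc

gc.set_debug(gc.DEBUG_SAVEALL)

lst = []
lst.append(lst)        # build a reference cycle
del lst
gc.collect()
assert len(gc.garbage) > 0   # the cycle was saved rather than freed

# turn the flag off, empty the list, and the next collection
# frees the objects normally
gc.set_debug(0)
del gc.garbage[:]
gc.collect()
assert gc.garbage == []
```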

Skip




From nascheme at enme.ucalgary.ca  Sat Sep  2 00:10:32 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Fri, 1 Sep 2000 16:10:32 -0600
Subject: [Python-Dev] Re: DEBUG_SAVEALL feature for gc not in 2.0b1?
In-Reply-To: <200009012147.XAA28215@python.inrialpes.fr>; from Vladimir Marangozov on Fri, Sep 01, 2000 at 11:47:59PM +0200
References: <20000901153612.A9121@keymaster.enme.ucalgary.ca> <200009012147.XAA28215@python.inrialpes.fr>
Message-ID: <20000901161032.B9121@keymaster.enme.ucalgary.ca>

On Fri, Sep 01, 2000 at 11:47:59PM +0200, Vladimir Marangozov wrote:
> By putting them in gc.collected_garbage. The next collect() should be
> able to empty this list if the DEBUG_SAVEALL flag is not set. Do you
> see any problems with this?

I don't really see the point.  If someone has set the SAVEALL flag then
they are obviously debugging a program.  I don't see much point
in the GC cleaning up this garbage.  The user can do it if they like.

I have an idea for an alternate interface.  What if there was a
gc.handle_garbage hook which could be set to a function?  The collector
would pass garbage objects to this function one at a time.  If the
function returns true then it means that the garbage was handled and the
collector should not call tp_clear.  These handlers could be chained
together like import hooks.  The default handler would simply append to
the gc.garbage list.  If a debugging flag was set then all found garbage
would be passed to this function rather than just uncollectable garbage.
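Since the proposed hook does not exist in the gc module, the chaining
behaviour described above can only be mocked in plain Python; every name
below (handlers, handle_garbage, log_cycles) is made up for illustration:

```python
handlers = []   # user-installed hooks, tried in order
garbage = []    # plays the role of gc.garbage

def default_handler(obj):
    garbage.append(obj)   # default behaviour: just record the object
    return True           # handled; the collector would skip tp_clear

def handle_garbage(obj):
    # walk the chain; the first handler returning true claims the object
    for handler in handlers:
        if handler(obj):
            return True
    return default_handler(obj)

def log_cycles(obj):
    # example user hook: claim only lists, pass on everything else
    if isinstance(obj, list):
        garbage.append(("logged", obj))
        return True
    return False

handlers.append(log_cycles)
handle_garbage([1, 2])     # claimed by log_cycles
handle_garbage({"a": 1})   # falls through to the default handler
assert garbage == [("logged", [1, 2]), {"a": 1}]
```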

Skip, would a hook like this be useful to you?

  Neil



From trentm at ActiveState.com  Sat Sep  2 00:15:13 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Fri, 1 Sep 2000 15:15:13 -0700
Subject: [Python-Dev] Even more prerelease Python fun on Windows!
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEIDHDAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Fri, Sep 01, 2000 at 05:53:09PM -0400
References: <LNBBLJKPBEHFEDALKOLCMEIDHDAA.tim_one@email.msn.com>
Message-ID: <20000901151513.B14097@ActiveState.com>

On Fri, Sep 01, 2000 at 05:53:09PM -0400, Tim Peters wrote:
> And I'm
> downright eager to ignore Win64 for now.

Works for me!

I won't get a chance to look at this for a while.

Trent


-- 
Trent Mick
TrentM at ActiveState.com



From gward at mems-exchange.org  Sat Sep  2 02:56:47 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Fri, 1 Sep 2000 20:56:47 -0400
Subject: [Python-Dev] Verbosity of the Makefile
In-Reply-To: <20000901142136.A8205@keymaster.enme.ucalgary.ca>; from nascheme@enme.ucalgary.ca on Fri, Sep 01, 2000 at 02:21:36PM -0600
References: <39AFE911.927AEDDF@lemburg.com> <200009011934.OAA02358@cj20424-a.reston1.va.home.com> <39AFF735.F9F3A252@lemburg.com> <200009012011.PAA02974@cj20424-a.reston1.va.home.com> <20000901142136.A8205@keymaster.enme.ucalgary.ca>
Message-ID: <20000901205647.A27038@ludwig.cnri.reston.va.us>

On 01 September 2000, Neil Schemenauer said:
> I'm going to pipe up again about non-recursive makefiles being a good
> thing.  This is another reason.

+1 in principle.  I suspect un-recursifying Python's build system would
be a pretty conclusive demonstration of whether the "Recursive Makefiles
Considered Harmful" thesis holds water.  Want to try to hack something
together one of these days?  (Probably not for 2.0, though.)

        Greg



From m.favas at per.dem.csiro.au  Sat Sep  2 03:15:11 2000
From: m.favas at per.dem.csiro.au (Mark Favas)
Date: Sat, 02 Sep 2000 09:15:11 +0800
Subject: [Python-Dev] test_gettext.py fails on 64-bit architectures
References: <39AEBD4A.55ABED9E@per.dem.csiro.au>
		<39AE07FF.478F413@per.dem.csiro.au>
		<14766.14278.609327.610929@anthem.concentric.net>
		<39AEBD01.601F7A83@per.dem.csiro.au> <14766.59597.713039.633184@anthem.concentric.net>
Message-ID: <39B0549F.DA8D07A8@per.dem.csiro.au>

"Barry A. Warsaw" wrote:
> Thanks to a quick chat with Tim, who is always quick to grasp the meat
> of the issue, we realize we need to & 0xffffffff all the 32 bit
> unsigned ints we're reading out of the .mo files.  I'll work out a
> patch, and check it in after a test on 32-bit Linux.  Watch for it,
> and please try it out on your box.

Yep - works fine on my 64-bitter (well, it certainly passes the test
<grin>)

Mark



From skip at mojam.com  Sat Sep  2 04:03:51 2000
From: skip at mojam.com (Skip Montanaro)
Date: Fri, 1 Sep 2000 21:03:51 -0500 (CDT)
Subject: [Python-Dev] Re: DEBUG_SAVEALL feature for gc not in 2.0b1?
In-Reply-To: <20000901161032.B9121@keymaster.enme.ucalgary.ca>
References: <20000901153612.A9121@keymaster.enme.ucalgary.ca>
	<200009012147.XAA28215@python.inrialpes.fr>
	<20000901161032.B9121@keymaster.enme.ucalgary.ca>
Message-ID: <14768.24583.622144.16075@beluga.mojam.com>

    Neil> On Fri, Sep 01, 2000 at 11:47:59PM +0200, Vladimir Marangozov wrote:
    >> By putting them in gc.collected_garbage. The next collect() should be
    >> able to empty this list if the DEBUG_SAVEALL flag is not set. Do you
    >> see any problems with this?

    Neil> I don't really see the point.  If someone has set the SAVEALL flag
    Neil> then they are obviously debugging a program.  I don't see much
    Neil> point in the GC cleaning up this garbage.  The user can do it if
    Neil> they like.

Agreed.

    Neil> I have an idea for an alternate interface.  What if there was a
    Neil> gc.handle_garbage hook which could be set to a function?  The
    Neil> collector would pass garbage objects to this function one at a
    Neil> time.  If the function returns true then it means that the garbage
    Neil> was handled and the collector should not call tp_clear.  These
    Neil> handlers could be chained together like import hooks.  The default
    Neil> handler would simply append to the gc.garbage list.  If a
    Neil> debugging flag was set then all found garbage would be passed to
    Neil> this function rather than just uncollectable garbage.

    Neil> Skip, would a hook like this be useful to you?

Sounds too complex for my feeble brain... ;-)

What's the difference between "found garbage" and "uncollectable garbage"?
What sort of garbage are you appending to gc.garbage now?  I thought by the
very nature of your garbage collector, anything it could free was otherwise
"uncollectable".

S



From effbot at telia.com  Sat Sep  2 11:31:04 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Sat, 2 Sep 2000 11:31:04 +0200
Subject: [Python-Dev] Even more prerelease Python fun on Windows!
References: <LNBBLJKPBEHFEDALKOLCMEIDHDAA.tim_one@email.msn.com>
Message-ID: <007901c014c0$852eff60$766940d5@hagrid>

tim wrote:
> Thomas's Win98FE suffers infinitely more blue-screen deaths than Tim's
> Win98SE ever did <wink>.

just fyi, Tkinter seems to be extremely unstable on Win95 and
Win98FE (when shut down, the python process grabs the keyboard
and hangs.  the only way to kill the process is to reboot)

the same version of Tk (wish) works just fine...

</F>




From effbot at telia.com  Sat Sep  2 13:32:31 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Sat, 2 Sep 2000 13:32:31 +0200
Subject: [Python-Dev] "declare" reserved word (was: pragma)
References: <200009010237.OAA18429@s454.cosc.canterbury.ac.nz> <39AF6C4C.62451C87@lemburg.com>
Message-ID: <01b201c014d1$7c081a00$766940d5@hagrid>

mal wrote:
> I gave some examples in the other pragma thread. The main
> idea behind "declare" is to define flags at compilation
> time, the encoding of string literals being one of the
> original motivations for introducing these flags:
>
> declare encoding = "latin-1"
> x = u"This text will be interpreted as Latin-1 and stored as Unicode"
>
> declare encoding = "ascii"
y = u"This is supposed to be ASCII, but contains äöü Umlauts - error !"

-1

for sanity's sake, we should only allow a *single* encoding per
source file.  anything else is madness.

besides, the goal should be to apply the encoding to the entire
file, not just the contents of string literals.

(hint: how many editing and display environments support multiple
encodings per text file?)

</F>




From mal at lemburg.com  Sat Sep  2 16:01:15 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Sat, 02 Sep 2000 16:01:15 +0200
Subject: [Python-Dev] "declare" reserved word (was: pragma)
References: <200009010237.OAA18429@s454.cosc.canterbury.ac.nz> <39AF6C4C.62451C87@lemburg.com> <01b201c014d1$7c081a00$766940d5@hagrid>
Message-ID: <39B1082B.4C9AB44@lemburg.com>

Fredrik Lundh wrote:
> 
> mal wrote:
> > I gave some examples in the other pragma thread. The main
> > idea behind "declare" is to define flags at compilation
> > time, the encoding of string literals being one of the
> > original motivations for introducing these flags:
> >
> > declare encoding = "latin-1"
> > x = u"This text will be interpreted as Latin-1 and stored as Unicode"
> >
> > declare encoding = "ascii"
> > y = u"This is supposed to be ASCII, but contains äöü Umlauts - error !"
> 
> -1

On the "declare" concept or just the above examples ?
 
> for sanity's sake, we should only allow a *single* encoding per
> source file.  anything else is madness.

Uhm, the above was meant as two *separate* examples. I completely
agree that multiple encodings per file should not be allowed
(this would be easy to implement in the compiler).
 
> besides, the goal should be to apply the encoding to the entire
> file, not just the contents of string literals.

I'm not sure this is a good idea. 

The only parts where the encoding matters are string
literals (unless I've overlooked some important detail).
All other parts which could contain non-ASCII text such as
comments are not seen by the compiler.

So all source code encodings should really be ASCII supersets
(even if just to make editing them using a plain 8-bit editor
sane).
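The distinction being debated can be sketched with byte decoding in
present-day Python (the "declare encoding" syntax itself is only a
proposal here): the same byte sequence is valid Latin-1 but invalid
ASCII.

```python
# Sketch, in modern Python, of what an encoding flag would control for
# string literals: the byte 0xe9 decodes fine as Latin-1 (e-acute) but
# is rejected as ASCII.
data = b"caf\xe9"

as_latin1 = data.decode("latin-1")     # accepted: every byte is defined
assert as_latin1 == "caf\xe9"

try:
    data.decode("ascii")               # rejected: 0xe9 is not ASCII
    ascii_failed = False
except UnicodeDecodeError:
    ascii_failed = True
assert ascii_failed
```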

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From Vladimir.Marangozov at inrialpes.fr  Sat Sep  2 16:07:52 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Sat, 2 Sep 2000 16:07:52 +0200 (CEST)
Subject: [Python-Dev] Re: DEBUG_SAVEALL feature for gc not in 2.0b1?
In-Reply-To: <20000901161032.B9121@keymaster.enme.ucalgary.ca> from "Neil Schemenauer" at Sep 01, 2000 04:10:32 PM
Message-ID: <200009021407.QAA29710@python.inrialpes.fr>

Neil Schemenauer wrote:
> 
> On Fri, Sep 01, 2000 at 11:47:59PM +0200, Vladimir Marangozov wrote:
> > By putting them in gc.collected_garbage. The next collect() should be
> > able to empty this list if the DEBUG_SAVEALL flag is not set. Do you
> > see any problems with this?
> 
> I don't really see the point.  If someone has set the SAVEALL flag then
> they are obviously debugging a program.  I don't see much point
> in the GC cleaning up this garbage.  The user can do it if they like.

The point is that we have two types of garbage: collectable and
uncollectable. Uncollectable garbage is already saved in gc.garbage
with or without debugging.

Uncollectable garbage is the most harmful. Fixing the program to
avoid that garbage is supposed to have top-ranked priority.

The discussion now goes on taking that one step further, i.e.
make sure that no cycles are created at all, ever. This is what
Skip wants. Skip wants to have access to the collectable garbage and
cleanup at best the code w.r.t. cycles. Fine, but collectable garbage
is priority 2 and mixing the two types of garbage is not nice. It is
not nice because the collector can deal with collectable garbage, but
gives up on the uncollectable one. This distinction in functionality
is important.

That's why I suggested to save the collectable garbage in gc.collected.

In this context, the name SAVEALL is a bit misleading. Uncollectable
garbage is already saved. What's missing is a flag & support to save
the collectable garbage. SAVECOLLECTED is a name on target.

Further, the collect() function should be able to clear gc.collected
if it is not empty and if SAVECOLLECTED is not set. This should not
be perceived as a big deal, though. I see it as a nicety for overall
consistency.

> 
> I have an idea for an alternate interface.  What if there was a
> gc.handle_garbage hook which could be set to a function?  The collector
> would pass garbage objects to this function one at a time.

This is too much. The idea here is to detect garbage earlier, but given
that one can call gc.set_threshold(1, 0, 0), thus invoking the collector on
every allocation, one gets the same effect with DEBUG_LEAK. There's
little to no added value.

Such hook may also exercise the latest changes Jeremy checked in:
if an exception is raised after GC, Python will scream at you with
a fatal error. I don't think it's a good idea to mix Python and C too
much for such a low-level machinery as the garbage collector.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From nascheme at enme.ucalgary.ca  Sat Sep  2 16:08:48 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Sat, 2 Sep 2000 08:08:48 -0600
Subject: [Python-Dev] Re: DEBUG_SAVEALL feature for gc not in 2.0b1?
In-Reply-To: <14768.24583.622144.16075@beluga.mojam.com>; from Skip Montanaro on Fri, Sep 01, 2000 at 09:03:51PM -0500
References: <20000901153612.A9121@keymaster.enme.ucalgary.ca> <200009012147.XAA28215@python.inrialpes.fr> <20000901161032.B9121@keymaster.enme.ucalgary.ca> <14768.24583.622144.16075@beluga.mojam.com>
Message-ID: <20000902080848.A13169@keymaster.enme.ucalgary.ca>

On Fri, Sep 01, 2000 at 09:03:51PM -0500, Skip Montanaro wrote:
> What's the difference between "found garbage" and "uncollectable garbage"?

I use the term uncollectable garbage for objects that the collector
cannot call tp_clear on because of __del__ methods.  These objects are
added to gc.garbage (actually, just the instances).  If SAVEALL is
enabled then all objects found are saved in gc.garbage and tp_clear is
not called.

Here is an example of how to use my proposed handle_garbage hook:

	class Vertex:
		def __init__(self):
			self.edges = []
		def add_edge(self, e):
			self.edges.append(e)
		def __del__(self):
			do_something()

	class Edge:
		def __init__(self, vertex_in, vertex_out):
			self.vertex_in = vertex_in
			vertex_in.add_edge(self)
			self.vertex_out = vertex_out
			vertex_out.add_edge(self)
			
This graph structure contains cycles and will not be collected by
reference counting.  It is also "uncollectable" because it contains a
finalizer on a strongly connected component (ie. other objects in the
cycle are reachable from the __del__ method).  With the current garbage
collector, instances of Edge and Vertex will appear in gc.garbage when
found to be unreachable by the rest of Python.  The application could
then periodically do:

	for obj in gc.garbage:
		if isinstance(obj, Vertex):
			obj.__dict__.clear()

which would break the reference cycles.  If a handle_garbage hook
existed the application could do:

	def break_graph_cycle(obj, next=gc.handle_garbage):
		if isinstance(obj, Vertex):
			obj.__dict__.clear()
			return 1
		else:
			return next(obj)
	gc.handle_garbage = break_graph_cycle

If you had a leaking program you could use this hook to debug it:

	def debug_cycle(obj, next=gc.handle_garbage):
		print "garbage:", repr(obj)
		return next(obj)

The hook seems to be more general than the gc.garbage list.

  Neil


From Vladimir.Marangozov at inrialpes.fr  Sat Sep  2 16:37:18 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Sat, 2 Sep 2000 16:37:18 +0200 (CEST)
Subject: [Python-Dev] Re: ... gcmodule.c,2.9,2.10
In-Reply-To: <20000901094821.A5571@keymaster.enme.ucalgary.ca> from "Neil Schemenauer" at Sep 01, 2000 09:48:21 AM
Message-ID: <200009021437.QAA29774@python.inrialpes.fr>

Neil Schemenauer wrote:
> 
> On Fri, Sep 01, 2000 at 10:24:46AM -0400, Jeremy Hylton wrote:
> > Even people who do have problems with cyclic garbage don't necessarily
> > need a collection every 100 allocations.  (Is my understanding of what
> > the threshold measures correct?)
> 
> It collects every net threshold0 allocations.  If you create and delete
> 1000 container objects in a loop then no collection would occur.
> 
> > But the difference in total memory consumption with the threshold at
> > 100 vs. 1000 vs. 5000 is not all that noticable, a few MB.

A few megabytes?  Phew! Jeremy -- more power mem to you!
I agree with Neil. 5000 is too high and the purpose of the inclusion
of the collector in the beta is precisely to exercise it & get feedback!
With a threshold of 5000 you've almost disabled the collector, leaving us
only with the memory overhead and the slowdown <wink>.

In short, bring it back to something low, please.

[Neil]
> A portable way to find the total allocated memory would be nice.
> Perhaps Vladimir's malloc will help us here.

Yep, the mem profiler. The profiler currently collects stats if
enabled. This is slow and unusable in production code. But if the
profiler is disabled, Python runs at full speed. However, the profiler
will include an interface which will ask the mallocs how much real
mem they manage. This is not implemented yet... Maybe the real mem
interface should go in a separate 'memory' module; don't know yet.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From Vladimir.Marangozov at inrialpes.fr  Sat Sep  2 17:00:47 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Sat, 2 Sep 2000 17:00:47 +0200 (CEST)
Subject: [Python-Dev] Even more prerelease Python fun on Windows!
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEIDHDAA.tim_one@email.msn.com> from "Tim Peters" at Sep 01, 2000 05:53:09 PM
Message-ID: <200009021500.RAA00776@python.inrialpes.fr>

Tim Peters wrote:
> 
> As below, except the new file is
> 
>     /pub/windows/beopen-python2b1p2-20000901.exe
>     5,783,115 bytes
> 
> still from anonymous FTP at python.beopen.com.  The p1 version has been
> removed.

In case my feedback matters, being a Windows amateur, the installation
went smoothly on my home P100 with some early Win95 pre-release. In the
great Windows tradition, I was asked to reboot & did so. The regression
tests passed in console mode. Then I launched IDLE successfully. In IDLE
I get *beep* sounds every time I hit RETURN without typing anything.
I was able to close both the console and IDLE without problems. Haven't
tried the uninstall link, though.

don't-ask-me-any-questions-about-Windows'ly y'rs
-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From guido at beopen.com  Sat Sep  2 17:56:30 2000
From: guido at beopen.com (Guido van Rossum)
Date: Sat, 02 Sep 2000 10:56:30 -0500
Subject: [Python-Dev] Verbosity of the Makefile
In-Reply-To: Your message of "Fri, 01 Sep 2000 20:56:47 -0400."
             <20000901205647.A27038@ludwig.cnri.reston.va.us> 
References: <39AFE911.927AEDDF@lemburg.com> <200009011934.OAA02358@cj20424-a.reston1.va.home.com> <39AFF735.F9F3A252@lemburg.com> <200009012011.PAA02974@cj20424-a.reston1.va.home.com> <20000901142136.A8205@keymaster.enme.ucalgary.ca>  
            <20000901205647.A27038@ludwig.cnri.reston.va.us> 
Message-ID: <200009021556.KAA02142@cj20424-a.reston1.va.home.com>

> On 01 September 2000, Neil Schemenauer said:
> > I'm going to pipe up again about non-recursive makefiles being a good
> > thing.  This is another reason.

Greg Ward:
> +1 in principle.  I suspect un-recursifying Python's build system would
> be a pretty conclusive demonstration of whether the "Recursive Makefiles
> Considered Harmful" thesis holds water.  Want to try to hack something
> together one of these days?  (Probably not for 2.0, though.)

To me this seems like a big waste of time.

I see nothing broken with the current setup.  The verbosity is taken
care of by "make -s", for individuals who don't want Make saying
anything.  Another useful option is "make --no-print-directory"; this
removes Make's noisiness about changing directories.  If the pgen
output really bothers you, then let's direct it to /dev/null.  None of
these issues seem to require getting rid of the Makefile recursion.

If it ain't broken, don't fix it!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Sat Sep  2 18:00:29 2000
From: guido at beopen.com (Guido van Rossum)
Date: Sat, 02 Sep 2000 11:00:29 -0500
Subject: [Python-Dev] Even more prerelease Python fun on Windows!
In-Reply-To: Your message of "Sat, 02 Sep 2000 17:00:47 +0200."
             <200009021500.RAA00776@python.inrialpes.fr> 
References: <200009021500.RAA00776@python.inrialpes.fr> 
Message-ID: <200009021600.LAA02199@cj20424-a.reston1.va.home.com>

[Vladimir]

> In IDLE I get *beep* sounds every time I hit RETURN without typing
> anything.

This appears to be a weird side effect of the last change I made in
IDLE:

----------------------------
revision 1.28
date: 2000/03/07 18:51:49;  author: guido;  state: Exp;  lines: +24 -0
Override the Undo delegator to forbid any changes before the I/O mark.
It beeps if you try to insert or delete before the "iomark" mark.
This makes the shell less confusing for newbies.
----------------------------

I hope we can fix this before 2.0 final goes out...

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From skip at mojam.com  Sat Sep  2 17:09:49 2000
From: skip at mojam.com (Skip Montanaro)
Date: Sat, 2 Sep 2000 10:09:49 -0500 (CDT)
Subject: [Python-Dev] Re: DEBUG_SAVEALL feature for gc not in 2.0b1?
In-Reply-To: <200009021407.QAA29710@python.inrialpes.fr>
References: <20000901161032.B9121@keymaster.enme.ucalgary.ca>
	<200009021407.QAA29710@python.inrialpes.fr>
Message-ID: <14769.6205.428574.926100@beluga.mojam.com>

    Vlad> The discussion now goes on taking that one step further, i.e.
    Vlad> make sure that no cycles are created at all, ever. This is what
    Vlad> Skip wants. Skip wants to have access to the collectable garbage
    Vlad> and cleanup at best the code w.r.t. cycles. 

If I read my (patched) version of gcmodule.c correctly, with the
gc.DEBUG_SAVEALL bit set, gc.garbage *does* acquire all garbage, not just
the stuff with __del__ methods.  In delete_garbage I see

    if (debug & DEBUG_SAVEALL) {
	    PyList_Append(garbage, op);
    } else {
            ... usual collection business here ...
    }
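The DEBUG_SAVEALL branch above can be observed from Python itself; a
minimal sketch (using modern Python, where the flag still exists):

```python
import gc

# With DEBUG_SAVEALL set, everything the collector finds unreachable is
# appended to gc.garbage instead of being freed, so the collector acts
# as a pure cycle detector -- which is exactly the use case here.
gc.collect()                      # start with no pending garbage
gc.set_debug(gc.DEBUG_SAVEALL)

class Node:
    def __init__(self):
        self.ref = self           # self-cycle: refcounting cannot free it

n = Node()
del n
gc.collect()
saved = len(gc.garbage)           # the Node (and its __dict__) land here
assert saved > 0

gc.set_debug(0)                   # restore normal collection
del gc.garbage[:]
```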

Skip



From Vladimir.Marangozov at inrialpes.fr  Sat Sep  2 17:43:05 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Sat, 2 Sep 2000 17:43:05 +0200 (CEST)
Subject: [Python-Dev] Re: DEBUG_SAVEALL feature for gc not in 2.0b1?
In-Reply-To: <14769.6205.428574.926100@beluga.mojam.com> from "Skip Montanaro" at Sep 02, 2000 10:09:49 AM
Message-ID: <200009021543.RAA01638@python.inrialpes.fr>

Skip Montanaro wrote:
> 
> If I read my (patched) version of gcmodule.c correctly, with the
> gc.DEBUG_SAVEALL bit set, gc.garbage *does* acquire all garbage, not just
> the stuff with __del__ methods.

Yes. And you don't know which objects are collectable and which ones
are not by this collector. That is, SAVEALL transforms the collector
into a cycle detector. The collectable and uncollectable objects belong
to two disjoint sets. I was arguing about this distinction, because
collectable garbage is not considered garbage any more, uncollectable
garbage is the real garbage left, but if you think this distinction
doesn't serve you any purpose, fine.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From effbot at telia.com  Sat Sep  2 18:05:33 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Sat, 2 Sep 2000 18:05:33 +0200
Subject: [Python-Dev] Bug #113254: pre/sre difference breaks pyclbr
Message-ID: <029001c014f7$a203a780$766940d5@hagrid>

paul prescod spotted this discrepancy:

from the documentation:

    start ([group]) 
    end ([group]) 
        Return the indices of the start and end of the
        substring matched by group; group defaults to
        zero (meaning the whole matched substring). Return
        None if group exists but did not contribute to the
        match.

however, it turns out that PCRE doesn't do what it's
supposed to:

>>> import pre
>>> m = pre.match("(a)|(b)", "b")
>>> m.start(1)
-1

unlike SRE:

>>> import sre
>>> m = sre.match("(a)|(b)", "b")
>>> m.start(1)
>>> print m.start(1)
None

this difference breaks 1.6's pyclbr (1.5.2's pyclbr works
just fine with SRE, though...)

:::

should I fix SRE and ask Fred to fix the docs, or should
someone fix pyclbr and maybe even PCRE?

</F>




From guido at beopen.com  Sat Sep  2 19:18:48 2000
From: guido at beopen.com (Guido van Rossum)
Date: Sat, 02 Sep 2000 12:18:48 -0500
Subject: [Python-Dev] Bug #113254: pre/sre difference breaks pyclbr
In-Reply-To: Your message of "Sat, 02 Sep 2000 18:05:33 +0200."
             <029001c014f7$a203a780$766940d5@hagrid> 
References: <029001c014f7$a203a780$766940d5@hagrid> 
Message-ID: <200009021718.MAA02318@cj20424-a.reston1.va.home.com>

> paul prescod spotted this discrepancy:
> 
> from the documentation:
> 
>     start ([group]) 
>     end ([group]) 
>         Return the indices of the start and end of the
>         substring matched by group; group defaults to
>         zero (meaning the whole matched substring). Return
>         None if group exists but did not contribute to the
>         match.
> 
> however, it turns out that PCRE doesn't do what it's
> supposed to:
> 
> >>> import pre
> >>> m = pre.match("(a)|(b)", "b")
> >>> m.start(1)
> -1
> 
> unlike SRE:
> 
> >>> import sre
> >>> m = sre.match("(a)|(b)", "b")
> >>> m.start(1)
> >>> print m.start(1)
> None
> 
> this difference breaks 1.6's pyclbr (1.5.2's pyclbr works
> just fine with SRE, though...)
> 
> :::
> 
> should I fix SRE and ask Fred to fix the docs, or should
> someone fix pyclbr and maybe even PCRE?

I'd suggest fix SRE and the docs, because -1 is a more useful
indicator for "no match" than None: it has the same type as valid
indices.  It makes it easier to adapt to static typing later.
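For what it's worth, this is the behavior the re module eventually
settled on; a quick check in modern Python:

```python
import re

# start()/end() return -1 for a group that exists but did not take part
# in the match, matching Guido's preference above.
m = re.match("(a)|(b)", "b")
assert m.start(1) == -1           # group 1 did not participate
assert m.end(1) == -1
assert m.start(2) == 0            # group 2 matched "b"
```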

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From effbot at telia.com  Sat Sep  2 18:54:57 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Sat, 2 Sep 2000 18:54:57 +0200
Subject: [Python-Dev] Bug #113254: pre/sre difference breaks pyclbr
References: <029001c014f7$a203a780$766940d5@hagrid>  <200009021718.MAA02318@cj20424-a.reston1.va.home.com>
Message-ID: <02d501c014fe$88aa8860$766940d5@hagrid>

[me]
> > from the documentation:
> > 
> >     start ([group]) 
> >     end ([group]) 
> >         Return the indices of the start and end of the
> >         substring matched by group; group defaults to
> >         zero (meaning the whole matched substring). Return
> >         None if group exists but did not contribute to the
> >         match.
> > 
> > however, it turns out that PCRE doesn't do what it's
> > supposed to:
> > 
> > >>> import pre
> > >>> m = pre.match("(a)|(b)", "b")
> > >>> m.start(1)
> > -1

[guido]
> I'd suggest fix SRE and the docs, because -1 is a more useful
> indicator for "no match" than None: it has the same type as valid
> indices.  It makes it easier to adapt to static typing later.

sounds reasonable.  I've fixed the code, leaving the docs to Fred.

this should probably go into 1.6 as well, since pyclbr depends on
it (well, I assume it does -- the pyclbr in the current repository
does, but maybe it's only been updated in the 2.0 code base?)

</F>




From jeremy at beopen.com  Sat Sep  2 19:33:47 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Sat, 2 Sep 2000 13:33:47 -0400
Subject: [Python-Dev] Re: ... gcmodule.c,2.9,2.10
In-Reply-To: <200009021437.QAA29774@python.inrialpes.fr>
Message-ID: <AJEAKILOCCJMDILAPGJNEEKFCBAA.jeremy@beopen.com>

Vladimir Marangozov wrote:
>Neil Schemenauer wrote:
>>
>> On Fri, Sep 01, 2000 at 10:24:46AM -0400, Jeremy Hylton wrote:
>> > Even people who do have problems with cyclic garbage don't necessarily
>> > need a collection every 100 allocations.  (Is my understanding of what
>> > the threshold measures correct?)
>>
>> It collects every net threshold0 allocations.  If you create and delete
>> 1000 container objects in a loop then no collection would occur.
>>
>> > But the difference in total memory consumption with the threshold at
>> > 100 vs. 1000 vs. 5000 is not all that noticable, a few MB.
>
>A few megabytes?  Phew! Jeremy -- more power mem to you!
>I agree with Neil. 5000 is too high and the purpose of the inclusion
>of the collector in the beta is precisely to exercise it & get feedback!
>With a threshold of 5000 you've almost disabled the collector, leaving us
>only with the memory overhead and the slowdown <wink>.
>
>In short, bring it back to something low, please.

I am happy to bring it to a lower number, but not as low as it was.  I
increased it forgetting that it was net allocations and not simply
allocations.  Of course, it's not exactly net allocations because if
deallocations occur while the count is zero, they are ignored.

My reason for disliking the previous lower threshold is that it causes
frequent collections, even in programs that produce no cyclic garbage.  I
understand the garbage collector to be a supplement to the existing
reference counting mechanism, which we expect to work correctly for most
programs.

The benefit of collecting the cyclic garbage periodically is to reduce the
total amount of memory the process uses, by freeing some memory to be reused
by malloc.  The specific effect on process memory depends on the program's
high-water mark for memory use and how much of that memory is consumed by
cyclic trash.  (GC also allows finalization to occur where it might not have
before.)

In one test I did, the high-water marks for a program run with 3000 GC
collections and with 300 GC collections were 13MB and 11MB respectively,
a difference of a little less than 20%.

The old threshold (100 net allocations) was low enough that most scripts run
several collections during compilation of the bytecode.  The only containers
created during compilation (or loading .pyc files) are the dictionaries that
hold constants.  If the GC is supplemental, I don't believe its threshold
should be set so low that it runs long before any cycles could be created.

The default threshold can be fairly high, because a program that has
problems caused by cyclic trash can set the threshold lower or explicitly
call the collector.  If we assume these programs are less common, there is
no reason to make all programs suffer all of the time.

I have trouble reasoning about the behavior of the pseudo-net allocations
count, but think I would be happier with a higher threshold.  I might find
it easier to understand if the count were of total allocations and
deallocations, with GC occurring every N allocation events.

Any suggestions about what a more reasonable value would be and why it is
reasonable?
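For reference, the knob under discussion is exposed through the gc
module; a small sketch (modern API names, with 700 as an arbitrary
middle-ground value):

```python
import gc

# The first threshold is the net-allocation count that triggers a
# collection -- 100 was the old default, 5000 the contested new one.
original = gc.get_threshold()

gc.set_threshold(700)             # pick a middle-ground value
assert gc.get_threshold()[0] == 700

gc.set_threshold(*original)       # put things back
```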

Jeremy





From skip at mojam.com  Sat Sep  2 19:43:06 2000
From: skip at mojam.com (Skip Montanaro)
Date: Sat, 2 Sep 2000 12:43:06 -0500 (CDT)
Subject: [Python-Dev] Re: DEBUG_SAVEALL feature for gc not in 2.0b1?
In-Reply-To: <200009021543.RAA01638@python.inrialpes.fr>
References: <14769.6205.428574.926100@beluga.mojam.com>
	<200009021543.RAA01638@python.inrialpes.fr>
Message-ID: <14769.15402.630192.4454@beluga.mojam.com>

    Vlad> Skip Montanaro wrote:
    >> 
    >> If I read my (patched) version of gcmodule.c correctly, with the
    >> gc.DEBUG_SAVEALL bit set, gc.garbage *does* acquire all garbage, not
    >> just the stuff with __del__ methods.

    Vlad> Yes. And you don't know which objects are collectable and which
    Vlad> ones are not by this collector. That is, SAVEALL transforms the
    Vlad> collector into a cycle detector. 

Which is precisely what I want.  I'm trying to locate cycles in a
long-running program.  In that environment collectable and uncollectable
garbage are just as bad since I still use 1.5.2 in production.

Skip



From tim_one at email.msn.com  Sat Sep  2 20:20:18 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 2 Sep 2000 14:20:18 -0400
Subject: [Python-Dev] Re: ... gcmodule.c,2.9,2.10
In-Reply-To: <AJEAKILOCCJMDILAPGJNEEKFCBAA.jeremy@beopen.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEKDHDAA.tim_one@email.msn.com>

[Neil and Vladimir say a threshold of 5000 is too high!]

[Jeremy says a threshold of 100 is too low!]

[merriment ensues]

> ...
> Any suggestions about what a more reasonable value would be and why
> it is reasonable?
>
> Jeremy

There's not going to be consensus on this, as the threshold is a crude handle on a complex
problem.  That's sure better than *no* handle, but trash behavior is so app-specific that
there simply won't be a killer argument.

In cases like this, the geometric mean of the extreme positions is always the best guess
<0.8 wink>:

>>> import math
>>> math.sqrt(5000 * 100)
707.10678118654755
>>>

So 9 times out of 10 we can run it with a threshold of 707, and 1 out of 10 with 708
<wink>.

Tuning strategies for gc *can* get as complex as OS scheduling algorithms, and for the
same reasons:  you're in the business of predicting the future based on just a few neurons
keeping track of gross summaries of what happened before.  A program can go through many
phases of quite different behavior over its life (like I/O-bound vs compute-bound, or
cycle-happy vs not), and at the phase boundaries past behavior is worse than irrelevant
(it's actively misleading).

So call it 700 for now.  Or 1000.  It's a bad guess at a crude heuristic regardless, and
if we avoid extreme positions we'll probably avoid doing as much harm as we *could* do
<0.9 wink>.  Over time, a more interesting measure may be how much cyclic trash
collections actually recover, and then collect less often the less trash we're finding
(ditto more often when we're finding more).  Another is like that, except replace "trash"
with "cycles (whether trash or not)".  The gross weakness of "net container allocations"
is that it doesn't directly measure what this system was created to do.

These things *always* wind up with dynamic measures, because static ones are just too
crude across apps.  Then the dynamic measures fail at phase boundaries too, and more
gimmicks are added to compensate for that.  Etc.  Over time it will get better for most
apps most of the time.  For now, we want *both* to exercise the code in the field and not
waste too much time, so hasty compromise is good for the beta.

let-a-thousand-thresholds-bloom-ly y'rs  - tim
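
The threshold being haggled over here is exposed as gc.set_threshold() in
the gc module; a quick sketch of inspecting it and applying the
geometric-mean compromise (the value is illustrative, not a recommendation):

```python
import gc

# gc.get_threshold() returns the three generation thresholds; the first
# is the "net container allocations" trigger debated in this thread.
print(gc.get_threshold())

# Apply the geometric-mean compromise to generation 0:
gc.set_threshold(700)
print(gc.get_threshold()[0])   # -> 700
```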





From tim_one at email.msn.com  Sat Sep  2 20:46:33 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 2 Sep 2000 14:46:33 -0400
Subject: [Python-Dev] Bug #113254: pre/sre difference breaks pyclbr
In-Reply-To: <02d501c014fe$88aa8860$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEKFHDAA.tim_one@email.msn.com>

[start/end (group)  documented to return None for group that
 didn't participate in the match
 sre does this
 pre actually returned -1
 this breaks pyclbr.py
 Guido sez pre's behavior is better & the docs should be changed
]

[/F]
> sounds reasonable.  I've fixed the code, leaving the docs to Fred.
>
> this should probably go into 1.6 as well, since pyclbr depends on
> it (well, I assume it does -- the pyclbr in the current repository
> does, but maybe it's only been updated in the 2.0 code base?)

Good point.  pyclbr got changed last year, to speed it and make it more robust for IDLE's
class browser display.  Which has another curious role to play in this screwup!  When
rewriting pyclbr's parsing, I didn't remember what start(group) would do for a
non-existent group.  In the old days I would have looked up the docs.  But since I had
gotten into the habit of *living* in an IDLE box all day, I just tried it instead and
"ah! -1 ... makes sense, I'll use that" was irresistible.  Since any code relying on the
docs would not have worked (None is the wrong type, and even the wrong value viewed as
boolean), the actual behavior should indeed win here.





From cgw at fnal.gov  Sat Sep  2 17:27:53 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Sat, 2 Sep 2000 10:27:53 -0500 (CDT)
Subject: [Python-Dev] Verbosity of the Makefile
In-Reply-To: <200009021556.KAA02142@cj20424-a.reston1.va.home.com>
References: <39AFE911.927AEDDF@lemburg.com>
	<200009011934.OAA02358@cj20424-a.reston1.va.home.com>
	<39AFF735.F9F3A252@lemburg.com>
	<200009012011.PAA02974@cj20424-a.reston1.va.home.com>
	<20000901142136.A8205@keymaster.enme.ucalgary.ca>
	<20000901205647.A27038@ludwig.cnri.reston.va.us>
	<200009021556.KAA02142@cj20424-a.reston1.va.home.com>
Message-ID: <14769.7289.688557.827915@buffalo.fnal.gov>

Guido van Rossum writes:

 > To me this seems like a big waste of time.
 > I see nothing broken with the current setup. 

I've built Python on every kind of system we have at FNAL, which means
Linux, several versions of Solaris, IRIX, DEC^H^H^HCompaq OSF/1, even
(shudder) WinNT, and the only complaint I've ever had with the build
system is that it doesn't do a "make depend" automatically.  (I don't
care too much about the dependencies on system headers, but the
Makefiles should at least know about the dependencies on Python's own
.h files, so when you change something like opcode.h or node.h it is
properly handled.  Fred got bitten by this when he tried to apply the
EXTENDED_ARG patch.)

Personally, I think that the "Recursive Make Considered Harmful" paper
is a bunch of hot air.  Many highly successful projects - the Linux
kernel, glibc, etc - use recursive Make.

 > If it ain't broken, don't fix it!

Amen!



From cgw at fnal.gov  Fri Sep  1 21:19:58 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Fri, 1 Sep 2000 14:19:58 -0500 (CDT)
Subject: [Python-Dev] Verbosity of the Makefile
In-Reply-To: <200009012011.PAA02974@cj20424-a.reston1.va.home.com>
References: <39AFE911.927AEDDF@lemburg.com>
	<200009011934.OAA02358@cj20424-a.reston1.va.home.com>
	<39AFF735.F9F3A252@lemburg.com>
	<200009012011.PAA02974@cj20424-a.reston1.va.home.com>
Message-ID: <14768.350.21353.538473@buffalo.fnal.gov>

For what it's worth, lots of verbosity in the Makefile makes me happy.
But I'm a verbose sort of guy...

(Part of the reason for sending this is to test if my mail is going
through.  Looks like there's currently no route from fnal.gov to
python.org, I wonder where the problem is?)



From cgw at fnal.gov  Fri Sep  1 18:06:48 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Fri, 1 Sep 2000 11:06:48 -0500 (CDT)
Subject: [Python-Dev] linuxaudiodev test does nothing
In-Reply-To: <20000901114945.A15688@ludwig.cnri.reston.va.us>
References: <14766.50976.102853.695767@buffalo.fnal.gov>
	<Pine.LNX.4.10.10008311804230.11804-100000@server1.lfw.org>
	<20000901114945.A15688@ludwig.cnri.reston.va.us>
Message-ID: <14767.54296.278370.953550@buffalo.fnal.gov>

Greg Ward wrote:

 > ...but the sound is horrible: various people opined on this list, many
 > months ago when I first reported the problem, that it's probably a
 > format problem.  (The wav/au mixup seems a likely candidate; it can't be
 > an endianness problem, since the .au file is 8-bit!)

Did you see the msg I sent yesterday?  (Maybe I send out too many mails)

I'm 99.9% sure it's a format problem, because if you replace
"audiotest.au" with some random ".wav" file, it works. (On my system
anyhow, with pretty generic cheapo soundblaster)

The code in test_linuxaudiodev.py has no chance of ever working
correctly: if you send mu-law encoded (i.e. logarithmic) data to a
device expecting linear, you will get noise.  You have to set the
format first. And the functions in linuxaudiodev which are intended
to set the format don't work, and go against what is recommended in
the OSS programming documentation.

IMHO this code is up for a complete rewrite, which I will submit post
2.0.  

The quick-and-dirty fix for the 2.0 release is to include
"audiotest.wav" and modify test_linuxaudiodev.py.


Ka-Ping Yee <ping at lfw.org> wrote:
> Are you talking about OSS vs. ALSA?  Didn't they at least try to
> keep some of the basic parts of the interface the same?

No, I'm talking about SoundBlaster8 vs. SoundBlaster16
vs. ProAudioSpectrum vs. Gravis vs. AdLib vs. TurtleBeach vs.... you
get the idea.  You can't know what formats are supported until you
probe the hardware.  Most of these cards *don't* handle logarithmic
data; and *then* depending on whether you have OSS or Alsa there may be
driver-side code to convert logarithmic data to linear before sending
it to the hardware.

The lowest-common-denominator, however, is raw 8-bit linear unsigned
data, which tends to be supported on all PC audio hardware.








From cgw at fnal.gov  Fri Sep  1 18:09:02 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Fri, 1 Sep 2000 11:09:02 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14766.54177.584090.198596@beluga.mojam.com>
References: <39AEC0F4.746656E2@per.dem.csiro.au>
	<14766.50283.758598.632542@bitdiddle.concentric.net>
	<14766.53002.467504.523298@beluga.mojam.com>
	<14766.53381.634928.615048@buffalo.fnal.gov>
	<14766.54177.584090.198596@beluga.mojam.com>
Message-ID: <14767.54430.927663.710733@buffalo.fnal.gov>

Skip Montanaro writes:
 > 
 > Makes no difference:
 > 
 >     % ulimit -a
 >     stack size (kbytes)         unlimited
 >     % ./python Misc/find_recursionlimit.py
 >     Limit of 2400 is fine
 >     repr
 >     Segmentation fault
 > 
 > Skip

This means that you're not hitting the rlimit at all but getting a
real segfault!  Time to do setrlimit -c unlimited and break out GDB,
I'd say.



From cgw at fnal.gov  Fri Sep  1 01:01:22 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Thu, 31 Aug 2000 18:01:22 -0500 (CDT)
Subject: [Python-Dev] linuxaudiodev test does nothing
In-Reply-To: <Pine.LNX.4.10.10008311804230.11804-100000@server1.lfw.org>
References: <14766.50976.102853.695767@buffalo.fnal.gov>
	<Pine.LNX.4.10.10008311804230.11804-100000@server1.lfw.org>
Message-ID: <14766.58306.977241.439169@buffalo.fnal.gov>

Ka-Ping Yee writes:

 > Side note: is there a well-defined platform-independent sound
 > interface we should be conforming to?  It would be nice to have a
 > single Python function for each of the following things:
 > 
 >     1. Play a .wav file given its filename.
 > 
 >     2. Play a .au file given its filename.

These may be possible.

 >     3. Play some raw audio data, given a string of bytes and a
 >        sampling rate.

This would never be possible unless you also specified the format and
encoding of the raw data - are they 8-bit, 16-bit, signed, unsigned,
big-endian, little-endian, linear, logarithmic ("mu_law"), etc?

Not only that, but some audio hardware will support some formats and
not others.  Some sound drivers will attempt to convert from a data
format which is not supported by the audio hardware to one which is,
and others will just reject the data if it's not in a format supported
by the hardware.  Trying to do anything with sound in a
platform-independent manner is near-impossible.  Even the same
"platform" (e.g. RedHat 6.2 on Intel) will behave differently
depending on what soundcard is installed.
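
For the file-based cases at least, the header already carries the format, so
a player can check it before feeding anything to the device; a minimal sketch
with the stdlib wave module (modern Python, using an in-memory buffer rather
than a real file):

```python
import io
import wave

# Write a tiny 8-bit mono .wav into a memory buffer...
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)           # mono
    w.setsampwidth(1)           # 8-bit unsigned linear samples
    w.setframerate(8000)
    w.writeframes(bytes(range(0, 256, 16)) * 50)

# ...then read the format back before deciding what the hardware gets.
buf.seek(0)
with wave.open(buf, "rb") as r:
    params = r.getparams()

print(params.nchannels, params.sampwidth, params.framerate)
```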



From skip at mojam.com  Sat Sep  2 22:37:54 2000
From: skip at mojam.com (Skip Montanaro)
Date: Sat, 2 Sep 2000 15:37:54 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14767.54430.927663.710733@buffalo.fnal.gov>
References: <39AEC0F4.746656E2@per.dem.csiro.au>
	<14766.50283.758598.632542@bitdiddle.concentric.net>
	<14766.53002.467504.523298@beluga.mojam.com>
	<14766.53381.634928.615048@buffalo.fnal.gov>
	<14766.54177.584090.198596@beluga.mojam.com>
	<14767.54430.927663.710733@buffalo.fnal.gov>
Message-ID: <14769.25890.529541.831812@beluga.mojam.com>

    >> % ulimit -a
    >> stack size (kbytes)         unlimited
    >> % ./python Misc/find_recursionlimit.py
    >> ...
    >> Limit of 2400 is fine
    >> repr
    >> Segmentation fault

    Charles> This means that you're not hitting the rlimit at all but
    Charles> getting a real segfault!  Time to do setrlimit -c unlimited and
    Charles> break out GDB, I'd say.

Running the program under gdb does no good.  It segfaults and winds up with
a corrupt stack as far as the debugger is concerned.  For some reason bash
won't let me set a core file size != 0 either:

    % ulimit -c
    0
    % ulimit -c unlimited
    % ulimit -c
    0

though I doubt letting the program dump core would be any better
debugging-wise than just running the interpreter under gdb's control.

Kinda weird.

Skip



From thomas at xs4all.net  Sat Sep  2 23:36:47 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Sat, 2 Sep 2000 23:36:47 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14767.54430.927663.710733@buffalo.fnal.gov>; from cgw@fnal.gov on Fri, Sep 01, 2000 at 11:09:02AM -0500
References: <39AEC0F4.746656E2@per.dem.csiro.au> <14766.50283.758598.632542@bitdiddle.concentric.net> <14766.53002.467504.523298@beluga.mojam.com> <14766.53381.634928.615048@buffalo.fnal.gov> <14766.54177.584090.198596@beluga.mojam.com> <14767.54430.927663.710733@buffalo.fnal.gov>
Message-ID: <20000902233647.Q12695@xs4all.nl>

On Fri, Sep 01, 2000 at 11:09:02AM -0500, Charles G Waldman wrote:
> Skip Montanaro writes:
>  > Makes no difference:

>  >     stack size (kbytes)         unlimited
>  >     % ./python Misc/find_recursionlimit.py
>  >     Limit of 2400 is fine
>  >     repr
>  >     Segmentation fault

> This means that you're not hitting the rlimit at all but getting a
> real segfault!  Time to do setrlimit -c unlimited and break out GDB,
> I'd say.

Yes, which I did (well, my girlfriend was hogging the PC with 'net
connection, and there was nothing but silly soft-porn on TV, so I spent an
hour or two on my laptop ;) and I did figure out the problem isn't
stackspace (which was already obvious) but *damned* if I know what the
problem is. 

Here's an easy way to step through the whole procedure, though. Take a
recursive script, like the one Guido posted:

    i = 0
    class C:
      def __getattr__(self, name):
          global i
          print i
          i += 1
          return self.name # common beginners' mistake

Run it once, so you get a ballpark figure on when it'll crash, and then
branch right before it would crash, calling some obscure function
(os.getpid() works nicely, very simple function.) This was about 2926 or so
on my laptop (adding the branch changed this number, oddly enough.)

    import os
    i = 0
    class C:
      def __getattr__(self, name):
          global i
          print i
          i += 1
          if (i > 2625):
              os.getpid()
          return self.name # common beginners' mistake

(I also moved the 'print i' to inside the branch, saved me a bit of
scrollin'.) Then start GDB on the python binary, set a breakpoint on
posix_getpid, and "run 'test.py'". You'll end up pretty close to where the
interpreter decides to go belly-up. Setting a breakpoint on ceval.c line 612
(the 'opcode = NEXTOP();' line) or so at that point helps doing a
per-bytecode check, though this made me miss the actual point of failure,
and I don't fancy doing it again just yet :P What I did see, however, was
that the reason for the crash isn't the pure recursion. It looks like the
recursiveness *does* get caught properly, and the interpreter raises an
error. And then prints that error over and over again, probably once for
every call to getattr(), and eventually *that* crashes (but why, I don't
know). In one test I did, it crashed in int_print, the print function for int
objects, which did 'fprintf(fp, "%ld", v->ival);'. The actual SEGV arrived
inside fprintf's internals. v->ival was a valid integer (though a high one)
and the problem was not dereferencing 'v'. 'fp' was stderr, according to its
_fileno member.

'ltrace' (if you have it) is also a nice tool to let loose on this kind of
script, by the way, though it does make the test take a lot longer, and you
really need enough diskspace to store the output ;-P

Back-to-augassign-docs-ly y'rs,

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!
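
For reference, the runaway __getattr__ pattern being chased in this thread is
exactly what the interpreter's recursion limit guards against; in modern
CPython the overflow surfaces as a catchable RecursionError well before the C
stack is at risk. A minimal sketch:

```python
import sys

sys.setrecursionlimit(200)   # make the limit easy to hit

class C:
    def __getattr__(self, name):
        return self.name     # the same beginners' mistake as above

caught = False
try:
    C().attr                 # recurses until the limit trips
except RecursionError:
    caught = True            # raised cleanly, no segfault

print(caught)  # -> True
```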



From Vladimir.Marangozov at inrialpes.fr  Sun Sep  3 00:06:41 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Sun, 3 Sep 2000 00:06:41 +0200 (CEST)
Subject: [Python-Dev] Re: ... gcmodule.c,2.9,2.10
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOEKDHDAA.tim_one@email.msn.com> from "Tim Peters" at Sep 02, 2000 02:20:18 PM
Message-ID: <200009022206.AAA02255@python.inrialpes.fr>

Tim Peters wrote:
>
> There's not going to be consensus on this, as the threshold is a crude 
> handle on a complex problem.  

Hehe. Tim gets philosophic again <wink>  

>
> In cases like this, the geometric mean of the extreme positions is 
> always the best guess <0.8 wink>:
> 
> >>> import math
> >>> math.sqrt(5000 * 100)
> 707.10678118654755
> >>>
>
> So 9 times out of 10 we can run it with a threshold of 707, and 1 out of 10 
> with 708 <wink>.
> 
> Tuning strategies for gc *can* get as complex as OS scheduling algorithms, 
> and for the same reasons:  you're in the business of predicting the future 
> based on just a few neurons keeping track of gross summaries of what 
> happened before. 
> ...
> [snip]

Right on target, Tim! It is well known that the recent past is the best 
approximation of the near future and that the past as a whole is the only
approximation we have at our disposal of the long-term future. If you add 
to that axioms like "memory management schemes influence the OS long-term 
scheduler", "the 50% rule applies for all allocation strategies", etc.,
it is clear that if we want to approach the optimum, we definitely need
to adjust the collection frequency according to some proportional scheme.

But even without saying this, your argument about dynamic GC thresholds
is enough to put Neil into a state of deep depression regarding the
current GC API <0.9 wink>.

Now let's be pragmatic: it is clear that the garbage collector will
make it for 2.0 -- be it enabled or disabled by default. So let's stick
to a compromise: 500 for the beta, 1000 for the final release. This
roughly complies with your geometric calculus, which mainly aims at
balancing the expressed opinions. It certainly isn't founded on any
existing theory or practice, as we all realized despite the
impressive math.sqrt() <wink>.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From cgw at alum.mit.edu  Sun Sep  3 02:52:33 2000
From: cgw at alum.mit.edu (Charles G Waldman)
Date: Sat, 2 Sep 2000 19:52:33 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions? 
In-Reply-To: <20000902233647.Q12695@xs4all.nl> 
References: <39AEC0F4.746656E2@per.dem.csiro.au> 
                <14766.50283.758598.632542@bitdiddle.concentric.net> 
                <14766.53002.467504.523298@beluga.mojam.com> 
                <14766.53381.634928.615048@buffalo.fnal.gov> 
                <14766.54177.584090.198596@beluga.mojam.com> 
                <14767.54430.927663.710733@buffalo.fnal.gov> 
                <20000902233647.Q12695@xs4all.nl> 
Message-ID: <14769.41169.108895.723628@sirius.net.home>

I said:
 > This means that you're not hitting the rlimit at all but getting a 
 > real segfault!  Time to do setrlimit -c unlimited and break out GDB, 
 > I'd say.   
 
Thomas Wouters came back with: 
> I did figure out the problem isn't stackspace (which was already
> obvious) but *damned* if I know what the problem is.  I don't fancy
> doing it again just yet :P:P What I did see, however, was that the
> reason for the crash isn't the pure recursion. It looks like the
> recursiveness *does* get caught properly, and the interpreter raises
> an error. And then prints that error over and over again, probably
> once for every call to getattr(), and eventually *that* crashes (but
> why, I don't know. In one test I did, it crashed in int_print, the
> print function for int objects, which did 'fprintf(fp, "%ld",
> v->ival);'. The actual SEGV arrived inside fprintf's
> internals. v->ival was a valid integer (though a high one) and the
> problem was not dereferencing 'v'. 'fp' was stderr, according to its
> _fileno member.
 
I've got some more info: this crash only happens if you have built
with --enable-threads.  This brings in a different (thread-safe)
version of fprintf, which uses mutex locks on file objects so output
from different threads doesn't get scrambled together.  And the SEGV
that I saw was happening exactly where fprintf is trying to unlock the
mutex on stderr, so it can print "Maximum recursion depth exceeded".
 
This looks like more ammo for Guido's theory that there's something 
wrong with libpthread on linux, and right now I'm elbows-deep in the 
guts of libpthread trying to find out more.  Fun little project for a
Saturday night ;-)      
 
> 'ltrace' (if you have it) is also a nice tool to let loose on this
> kind of script, by the way, though it does make the test take a lot
> longer, and you really need enough diskspace to store the output ;-P
 
Sure, I've got ltrace, and also more diskspace than you really want to 
know about!

Working-at-a-place-with-lots-of-machines-can-be-fun-ly yr's,
					
					-Charles
 




From m.favas at per.dem.csiro.au  Sun Sep  3 02:53:11 2000
From: m.favas at per.dem.csiro.au (Mark Favas)
Date: Sun, 03 Sep 2000 08:53:11 +0800
Subject: [Python-Dev] failure in test_sre???
Message-ID: <39B1A0F7.D8FF0076@per.dem.csiro.au>

Is it just me, or is test_sre meant to fail, following the recent
changes to _sre.c?

Short failure message:
test test_sre failed -- Writing: 'sre.match("\\x%02x" % i, chr(i)) !=
None', expected: ''

Full failure messages:
Running tests on character literals
sre.match("\x%02x" % i, chr(i)) != None FAILED
Traceback (most recent call last):
  File "test_sre.py", line 18, in test
    r = eval(expression)
ValueError: invalid \x escape
sre.match("\x%02x0" % i, chr(i)+"0") != None FAILED
Traceback (most recent call last):
  File "test_sre.py", line 18, in test
    r = eval(expression)
ValueError: invalid \x escape
sre.match("\x%02xz" % i, chr(i)+"z") != None FAILED
Traceback (most recent call last):
  File "test_sre.py", line 18, in test
    r = eval(expression)
ValueError: invalid \x escape

(the above sequence is repeated another 7 times) 

-- 
Mark



From m.favas at per.dem.csiro.au  Sun Sep  3 04:05:03 2000
From: m.favas at per.dem.csiro.au (Mark Favas)
Date: Sun, 03 Sep 2000 10:05:03 +0800
Subject: [Python-Dev] Namespace collision between lib/xml and 
 site-packages/xml
References: <200009010400.XAA30273@cj20424-a.reston1.va.home.com>
Message-ID: <39B1B1CF.572955FC@per.dem.csiro.au>

Guido van Rossum wrote:
> 
> You might be able to get the old XML-sig code to override the core xml
> package by creating a symlink named _xmlplus to it in site-packages
> though.

Nope - doing this allows the imports to succeed where before they were
failing, but I get a "SAXException: No parsers found" failure now. No
big deal - I'll probably rename the xml-sig stuff and include it in my
app.

-- 
Mark



From tim_one at email.msn.com  Sun Sep  3 05:18:31 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 2 Sep 2000 23:18:31 -0400
Subject: [Python-Dev] failure in test_sre???
In-Reply-To: <39B1A0F7.D8FF0076@per.dem.csiro.au>
Message-ID: <LNBBLJKPBEHFEDALKOLCCELCHDAA.tim_one@email.msn.com>

[Mark Favas, on new test_sre failures]
> Is it just me, or is test_sre meant to fail, following the recent
> changes to _sre.c?

Checkins are never supposed to leave the test suite in a failing state, but
while that's "the rule" it's still too rarely the reality (although *much*
better than it was just a month ago -- whining works <wink>).  Offhand these
look like shallow new failures to me, related to /F's so-far partial
implemention of PEP 223 (Change the Meaning of \x Escapes).  I'll dig into a
little more.  Rest assured it will get fixed before the 2.0b1 release!

> Short failure message:
> test test_sre failed -- Writing: 'sre.match("\\x%02x" % i, chr(i)) !=
> None', expected: ''
>
> Full failure messages:
> Running tests on character literals
> sre.match("\x%02x" % i, chr(i)) != None FAILED
> Traceback (most recent call last):
>   File "test_sre.py", line 18, in test
>     r = eval(expression)
> ValueError: invalid \x escape
> sre.match("\x%02x0" % i, chr(i)+"0") != None FAILED
> Traceback (most recent call last):
>   File "test_sre.py", line 18, in test
>     r = eval(expression)
> ValueError: invalid \x escape
> sre.match("\x%02xz" % i, chr(i)+"z") != None FAILED
> Traceback (most recent call last):
>   File "test_sre.py", line 18, in test
>     r = eval(expression)
> ValueError: invalid \x escape
>
> (the above sequence is repeated another 7 times)
>
> --
> Mark
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://www.python.org/mailman/listinfo/python-dev





From skip at mojam.com  Sun Sep  3 06:25:49 2000
From: skip at mojam.com (Skip Montanaro)
Date: Sat, 2 Sep 2000 23:25:49 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <20000902233647.Q12695@xs4all.nl>
References: <39AEC0F4.746656E2@per.dem.csiro.au>
	<14766.50283.758598.632542@bitdiddle.concentric.net>
	<14766.53002.467504.523298@beluga.mojam.com>
	<14766.53381.634928.615048@buffalo.fnal.gov>
	<14766.54177.584090.198596@beluga.mojam.com>
	<14767.54430.927663.710733@buffalo.fnal.gov>
	<20000902233647.Q12695@xs4all.nl>
Message-ID: <14769.53966.93066.283106@beluga.mojam.com>

    Thomas> In one test I did, it crashed in int_print, the print function
    Thomas> for int objects, which did 'fprintf(fp, "%ld", v->ival);'.  The
    Thomas> actual SEGV arrived inside fprintf's internals. v->ival was a
    Thomas> valid integer (though a high one) and the problem was not
    Thomas> derefrencing 'v'. 'fp' was stderr, according to its _fileno
    Thomas> member.

I get something similar.  The script conks out after 4491 calls (this with a
threaded interpreter).  It segfaults in _IO_vfprintf trying to print 4492 to
stdout.  All arguments to _IO_vfprintf appear valid (though I'm not quite
sure how to print the third, va_list, argument).

When I configure --without-threads, the script runs much longer, making it
past 18068.  It conks out in the same spot, however, trying to print 18069.
The fact that it occurs in the same place with and without threads (the
addresses of the two different _IO_vfprintf functions are different, which
implies different stdio libraries are active in the threading and
non-threading versions as Thomas said), suggests to me that the problem may
simply be that in the threading case each thread (even the main thread) is
limited to a much smaller stack.  Perhaps I'm seeing what I'm supposed to
see.  If the two versions were to crap out for different reasons, I doubt
I'd see them failing in the same place.

Skip





From cgw at fnal.gov  Sun Sep  3 07:34:24 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Sun, 3 Sep 2000 00:34:24 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14769.53966.93066.283106@beluga.mojam.com>
References: <39AEC0F4.746656E2@per.dem.csiro.au>
	<14766.50283.758598.632542@bitdiddle.concentric.net>
	<14766.53002.467504.523298@beluga.mojam.com>
	<14766.53381.634928.615048@buffalo.fnal.gov>
	<14766.54177.584090.198596@beluga.mojam.com>
	<14767.54430.927663.710733@buffalo.fnal.gov>
	<20000902233647.Q12695@xs4all.nl>
	<14769.53966.93066.283106@beluga.mojam.com>
Message-ID: <14769.58081.532.747747@buffalo.fnal.gov>

Skip Montanaro writes:

 > When I configure --without-threads, the script runs much longer, making it
 > past 18068.  It conks out in the same spot, however, trying to print 18069.
 > The fact that it occurs in the same place with and without threads (the
 > addresses of the two different _IO_vfprintf functions are different, which
 > implies different stdio libraries are active in the threading and
 > non-threading versions as Thomas said), suggests to me that the problem may
 > simply be that in the threading case each thread (even the main thread) is
 > limited to a much smaller stack.  Perhaps I'm seeing what I'm supposed to
 > see.  If the two versions were to crap out for different reasons, I doubt
 > I'd see them failing in the same place.

Yes, libpthread defines its own version of _IO_vfprintf.

Try this experiment:  do a "ulimit -a" to see what the stack size
limit is; start your Python process; find its PID, and before you
start your test, go into another window and run the command
watch -n 0 "grep Stk /proc/<pythonpid>/status"

This will show exactly how much stack Python is using.  Then start the
runaway-recursion test.  If it craps out when the stack usage hits the
rlimit, you are seeing what you are supposed to see.  If it craps out
anytime sooner, there is a real bug of some sort, as I'm 99% sure
there is.



From thomas at xs4all.net  Sun Sep  3 09:44:51 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Sun, 3 Sep 2000 09:44:51 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14769.41169.108895.723628@sirius.net.home>; from cgw@alum.mit.edu on Sat, Sep 02, 2000 at 07:52:33PM -0500
References: <39AEC0F4.746656E2@per.dem.csiro.au> <14766.50283.758598.632542@bitdiddle.concentric.net> <14766.53002.467504.523298@beluga.mojam.com> <14766.53381.634928.615048@buffalo.fnal.gov> <14766.54177.584090.198596@beluga.mojam.com> <14767.54430.927663.710733@buffalo.fnal.gov> <20000902233647.Q12695@xs4all.nl> <14769.41169.108895.723628@sirius.net.home>
Message-ID: <20000903094451.R12695@xs4all.nl>

On Sat, Sep 02, 2000 at 07:52:33PM -0500, Charles G Waldman wrote:

> This looks like more ammo for Guido's theory that there's something 
> wrong with libpthread on linux, and right now I'm elbows-deep in the 
> guts of libpthread trying to find out more.  Fun little project for a
> Saturday night ;-)      

I concur that it's probably not Python-related, even if it's probably
Python-triggered (and possibly Python-induced, because of some setting or
other) -- but I think it would be very nice to work around it! And we have
roughly the same recursion limit for BSDI with a 2Mbyte stack limit, so let's
not adjust that guesstimate just yet.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From tim_one at email.msn.com  Sun Sep  3 10:25:38 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 3 Sep 2000 04:25:38 -0400
Subject: [Python-Dev] failure in test_sre???
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCELCHDAA.tim_one@email.msn.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCIELOHDAA.tim_one@email.msn.com>

> [Mark Favas, on new test_sre failures]
> > Is it just me, or is test_sre meant to fail, following the recent
> > changes to _sre.c?

I just checked in a fix for this.  /F also implemented PEP 223, and it had a
surprising consequence for test_sre!  There were three test lines (in a loop,
that's why you got so many failures) of the form:

    test(r"""sre.match("\x%02x" % i, chr(i)) != None""", 1)

Note the

    "\x%02x"

part.  Before PEP 223, that "expanded" to itself:

    "\x%02x"

because the damaged \x escape was ignored.  After PEP 223, it raised the

    ValueError: invalid \x escape

you kept seeing.  The fix was merely to change these 3 lines to use, e.g.,

    r"\x%02x"

instead.  Pattern strings should usually be r-strings anyway.
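
The difference is easy to demonstrate with the modern re module (sre's
descendant); a small sketch over the printable character codes:

```python
import re

# Without the r-prefix, "\x%02x" would itself be an invalid \x escape
# in the string literal.  With it, the regex engine sees a \xNN escape
# and matches the intended character.
for i in range(32, 127):
    pattern = r"\x%02x" % i          # e.g. r"\x41" for i == 65
    assert re.match(pattern, chr(i)) is not None
```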





From Vladimir.Marangozov at inrialpes.fr  Sun Sep  3 11:21:42 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Sun, 3 Sep 2000 11:21:42 +0200 (CEST)
Subject: [Python-Dev] Copyright gag
Message-ID: <200009030921.LAA08963@python.inrialpes.fr>

Even CVS got confused about Python's copyright <wink>

~> cvs update
...
cvs server: Updating Demo/zlib
cvs server: Updating Doc
cvs server: nothing known about Doc/COPYRIGHT
cvs server: Updating Doc/api
cvs server: Updating Doc/dist
...

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From effbot at telia.com  Sun Sep  3 12:10:01 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Sun, 3 Sep 2000 12:10:01 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src LICENSE,1.1.2.7,1.1.2.8
References: <200009030228.TAA12677@slayer.i.sourceforge.net>
Message-ID: <00a501c0158f$25a5bfa0$766940d5@hagrid>

guido wrote:
> Modified Files:
>       Tag: cnri-16-start
> LICENSE 
> Log Message:
> Set a release date, now that there's agreement between
> CNRI and the FSF.

and then he wrote:

> Modified Files:
> LICENSE 
> Log Message:
> Various edits.  Most importantly, added dual licensing.  Also some
> changes suggested by BobW.

where "dual licensing" means:

    ! 3. Instead of using this License, you can redistribute and/or modify
    ! the Software under the terms of the GNU General Public License as
    ! published by the Free Software Foundation; either version 2, or (at
    ! your option) any later version.  For a copy of the GPL, see
    ! http://www.gnu.org/copyleft/gpl.html.
  
what's going on here?  what exactly does the "agreement" mean?

(I can guess, but my guess doesn't make me happy. I didn't really
think I would end up in a situation where people can take code I've
written, make minor modifications to it, and re-release it in source
form in a way that makes it impossible for me to use it...)

</F>




From guido at beopen.com  Sun Sep  3 16:03:46 2000
From: guido at beopen.com (Guido van Rossum)
Date: Sun, 03 Sep 2000 09:03:46 -0500
Subject: [Python-Dev] Re: Conflict with the GPL
In-Reply-To: Your message of "Sun, 03 Sep 2000 12:09:12 +0200."
             <00a401c0158f$24dc5520$766940d5@hagrid> 
References: <LNBBLJKPBEHFEDALKOLCAEGFHDAA.tim_one@email.msn.com> <39AF83F9.67DA7A0A@lemburg.com> <dcwvgu56li.fsf@pacific.beopen.com>  
            <00a401c0158f$24dc5520$766940d5@hagrid> 
Message-ID: <200009031403.JAA11856@cj20424-a.reston1.va.home.com>

> bob weiner wrote:    
> > We are doing a lot of work at BeOpen with CNRI to get them to allow
> > the GPL as an alternative license across the CNRI-derived parts of the
> > codebase.  /.../  We at BeOpen want GPL-compatibility and have pushed
> > for that since we started with any Python licensing issues.

Fredrik Lundh replied:
> my understanding was that the consortium members agreed
> that GPL-compatibility was important, but that it didn't mean
> that licensing Python under the GPL was a good thing.
> 
> was dual licensing discussed on the consortium meeting?

Can't remember, probably was mentioned as one of the considered
options.  Certainly the consortium members present at the meeting in
Monterey agreed that GPL compatibility was important.

> is the consortium (and this mailing list) irrelevant in this
> discussion?

You posted a +0 for dual licensing if it was the *only* possibility to
reach GPL-compatibility for future Python licenses.  That's also my
own stance on this.

I don't believe I received any other relevant feedback.  I did see
several posts from consortium members Paul Everitt and Jim Ahlstrom,
defending the choice of law clause in the CNRI license and explaining
why the GPL is not a great license and why a pure GPL license is
unacceptable for Python; I take these very seriously.

Bob Weiner and I talked for hours with Kahn on Friday night and
Saturday; I talked to Stallman several times on Saturday; Kahn and
Stallman talked on Saturday.  Dual licensing really was the *only* way
to reach an agreement.  So I saw no way out of the impasse except to
just do it and get it over with.

Kahn insisted that 1.6final be released before 2.0b1 and 2.0b1 be made
a derived work of 1.6final.  To show that he was serious, he shut off
our login access to python.org and threatened legal action if we
proceeded with the 2.0b1 release as a derived work of 1.6b1.  I
don't understand why this is so important to him, but it clearly is.
I want 2.0b1 to be released (don't you?) so I put an extra effort in
to round up Stallman and make sure he and Kahn got on the phone to get
a resolution, and for a blissful few hours I believed it was all done.

Unfortunately the fat lady hasn't sung yet.

After we thought we had reached agreement, Stallman realized that
there are two interpretations of what will happen next:

    1. BeOpen releases a version for which the license is, purely and
    simply, the GPL.

    2. BeOpen releases a version which states the GPL as the license,
    and also states the CNRI license as applying with its text to part
    of the code.

His understanding of the agreement (and that of his attorney, Eben
Moglen, a law professor at NYU) was based on #1.  It appears that what
CNRI will explicitly allow BeOpen (and what the 1.6 license already
allows) is #2.  Stallman will have to get Moglen's opinion, which may
take weeks.  It's possible that they think that the BeOpen license is
still incompatible with the GPL.  In that case (assuming it happens
within a reasonable time frame, and not e.g. 5 years from now :-) we
have Kahn's agreement to go back to the negotiation table and talk to
Stallman about possible modifications to the CNRI license.  If the
license changes, we'll re-release Python 1.6 as 1.6.1 with the new
license, and we'll use that for BeOpen releases.  If dual-licensing is
no longer needed at that point I'm for taking it out again.

> > > BTW, anybody got a word from RMS on whether the "choice of law"
> > > is really the only one bugging him ?
> >
> > Yes, he has told me that was the only remaining issue.
> 
> what's the current status here?  Guido just checked in a new
> 2.0 license that doesn't match the text he posted here a few
> days ago.  Most notable, the new license says:
> 
>     3. Instead of using this License, you can redistribute and/or modify
>     the Software under the terms of the GNU General Public License as
>     published by the Free Software Foundation; either version 2, or (at
>     your option) any later version.  For a copy of the GPL, see
>     http://www.gnu.org/copyleft/gpl.html.
> 
> on the other hand, another checkin message mentions agreement
> between CNRI and the FSF.  did they agree to disagree?

I think I've explained most of this above.  I don't recall that
checkin message.  Which file?  I checked the cvs logs for README and
LICENSE for both the 1.6 and 2.0 branch.

Anyway, the status is that 1.6 final is incompatible with the GPL and
that for 2.0b1 we may or may not have GPL compatibility based on the
dual licensing clause.

I'm not too happy with the final wart.  We could do the following:
take the dual licensing clause out of 2.0b1, and promise to put it
back into 2.0final if it is still needed.  After all, it's only a
beta, and we don't *want* Debian to put 2.0b1 in their distribution,
do we?  But personally I'm of an optimistic nature; I still hope that
Moglen will find this solution acceptable and that this will be the
end of the story.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From effbot at telia.com  Sun Sep  3 15:36:52 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Sun, 3 Sep 2000 15:36:52 +0200
Subject: [Python-Dev] Re: Conflict with the GPL
References: <LNBBLJKPBEHFEDALKOLCAEGFHDAA.tim_one@email.msn.com> <39AF83F9.67DA7A0A@lemburg.com> <dcwvgu56li.fsf@pacific.beopen.com>              <00a401c0158f$24dc5520$766940d5@hagrid>  <200009031403.JAA11856@cj20424-a.reston1.va.home.com>
Message-ID: <005a01c015ac$079f1c00$766940d5@hagrid>

guido wrote:

> I want 2.0b1 to be released (don't you?) so I put an extra effort in
> to round up Stallman and make sure he and Kahn got on the phone to get
> a resolution, and for a blissful few hours I believed it was all done.

well, after reading the rest of your mail, I'm not so
sure...

> After we thought we had reached agreement, Stallman realized that
> there are two interpretations of what will happen next:
> 
>     1. BeOpen releases a version for which the license is, purely and
>     simply, the GPL.
> 
>     2. BeOpen releases a version which states the GPL as the license,
>     and also states the CNRI license as applying with its text to part
>     of the code.

"to part of the code"?

are you saying the 1.6 will be the last version that is
truly free for commercial use???

what parts would be GPL-only?

</F>




From guido at beopen.com  Sun Sep  3 16:35:31 2000
From: guido at beopen.com (Guido van Rossum)
Date: Sun, 03 Sep 2000 09:35:31 -0500
Subject: [Python-Dev] Re: Conflict with the GPL
In-Reply-To: Your message of "Sun, 03 Sep 2000 15:36:52 +0200."
             <005a01c015ac$079f1c00$766940d5@hagrid> 
References: <LNBBLJKPBEHFEDALKOLCAEGFHDAA.tim_one@email.msn.com> <39AF83F9.67DA7A0A@lemburg.com> <dcwvgu56li.fsf@pacific.beopen.com> <00a401c0158f$24dc5520$766940d5@hagrid> <200009031403.JAA11856@cj20424-a.reston1.va.home.com>  
            <005a01c015ac$079f1c00$766940d5@hagrid> 
Message-ID: <200009031435.JAA12281@cj20424-a.reston1.va.home.com>

> guido wrote:
> 
> > I want 2.0b1 to be released (don't you?) so I put an extra effort in
> > to round up Stallman and make sure he and Kahn got on the phone to get
> > a resolution, and for a blissful few hours I believed it was all done.
> 
> well, after reading the rest of your mail, I'm not so
> sure...

Agreed. :-(

> > After we thought we had reached agreement, Stallman realized that
> > there are two interpretations of what will happen next:
> > 
> >     1. BeOpen releases a version for which the license is, purely and
> >     simply, the GPL.
> > 
> >     2. BeOpen releases a version which states the GPL as the license,
> >     and also states the CNRI license as applying with its text to part
> >     of the code.
> 
> "to part of the code"?
> 
> are you saying the 1.6 will be the last version that is
> truly free for commercial use???
> 
> what parts would be GPL-only?

Aaaaargh!  Please don't misunderstand me!  No part of Python will be
GPL-only!  At best we'll dual license.

This was quoted directly from Stallman's mail about this issue.  *He*
doesn't care about the other half of the dual license, so he doesn't
mention it.

Sorry!!!!!!!!!!!!!!!!!!!!!!!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Sun Sep  3 17:18:07 2000
From: guido at beopen.com (Guido van Rossum)
Date: Sun, 03 Sep 2000 10:18:07 -0500
Subject: [Python-Dev] New commands to display license, credits, copyright info
Message-ID: <200009031518.KAA12926@cj20424-a.reston1.va.home.com>

The copyright in 2.0 will be 5 or 6 lines (three copyright statements,
with an "All Rights Reserved" for each -- according to CNRI's wishes).

This will cause a lot of scrolling at the start of a session.

Does anyone care?

Bob Weiner (my boss at BeOpen) suggested that we could add commands
to display such information instead.  Here's a typical suggestion with
his idea implemented:

    Python 2.0b1 (#134, Sep  3 2000, 10:04:03) 
    [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
    Type "copyright", "license" or "credits" for this information.
    >>> copyright
    Copyright (c) 2000 BeOpen.com; All Rights Reserved.
    Copyright (c) 1995-2000 Corporation for National Research Initiatives;
    All Rights Reserved.
    Copyright (c) 1991-1995 Stichting Mathematisch Centrum, Amsterdam;
    All Rights Reserved.

    >>> credits
    A BeOpen PythonLabs-led production.

    >>> license
    HISTORY OF THE SOFTWARE
    =======================

    Python was created in the early 1990s by Guido van Rossum at Stichting
    Mathematisch Centrum (CWI) in the Netherlands as a successor of a
    language called ABC.  Guido is Python's principal author, although it
        .
        .(etc)
        .
    Hit Return for more, or q (and Return) to quit: q

    >>>

How would people like this?  (The blank line before the prompt is
unavoidable due to the mechanics of how objects are printed.)

Any suggestions for what should go in the "credits" command?

(I considered taking the detailed (messy!) GCC version info out as
well, but decided against it.  There's a bit of a tradition in bug
reports to quote the interpreter header and showing the bug in a
sample session; the compiler version is often relevant.  Expecting
that bug reporters will include this information manually won't work.
Instead, I broke it up into two lines.)

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From cgw at alum.mit.edu  Sun Sep  3 17:53:08 2000
From: cgw at alum.mit.edu (Charles G Waldman)
Date: Sun, 3 Sep 2000 10:53:08 -0500 (CDT)
Subject: [Python-Dev] New commands to display licence, credits, copyright info
Message-ID: <14770.29668.639079.511087@sirius.net.home>

I like Bob W's suggestion a lot.  It is more open-ended and scalable
than just continuing to add more and more lines to the startup
messages.  I assume these commands would only be in effect in
interactive mode, right?

You could also maybe add a "help" command, which, if nothing else,
could get people pointed at the online tutorial/manuals.

And, by all means, please keep the compiler version in the startup
message!



From guido at beopen.com  Sun Sep  3 18:59:55 2000
From: guido at beopen.com (Guido van Rossum)
Date: Sun, 03 Sep 2000 11:59:55 -0500
Subject: [Python-Dev] New commands to display licence, credits, copyright info
In-Reply-To: Your message of "Sun, 03 Sep 2000 10:53:08 EST."
             <14770.29668.639079.511087@sirius.net.home> 
References: <14770.29668.639079.511087@sirius.net.home> 
Message-ID: <200009031659.LAA14864@cj20424-a.reston1.va.home.com>

> I like Bob W's suggestion a lot.  It is more open-ended and scalable
> than just continuing to add more and more lines to the startup
> messages.  I assume these commands would only be in effect in
> interactive mode, right?

Actually, for the benefit of tools like IDLE (which have an
interactive read-eval-print loop but don't appear to be interactive
during initialization), they are always added.  They are implemented
as funny builtins, whose repr() prints the info and then returns "".
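
[Editorial sketch of that mechanism; the actual 2.0 code (in site.py) may differ:]

```python
class _InfoPrinter:
    # Hypothetical stand-in for the real implementation: an object
    # whose repr() prints its text and returns an empty string, so
    # typing its name at the interactive prompt displays the text.
    def __init__(self, text):
        self._text = text

    def __repr__(self):
        print(self._text)
        return ""

credits = _InfoPrinter("A BeOpen PythonLabs-led production.")
# At the >>> prompt, entering just `credits` triggers repr(), which
# prints the message; the empty repr string is what produces the
# stray blank line before the next prompt.
```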

> You could also maybe add a "help" command, which, if nothing else,
> could get people pointed at the online tutorial/manuals.

Sure -- and "doc".  Later, after 2.0b1.

> And, by all means, please keep the compiler version in the startup
> message!

Will do.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From cgw at alum.mit.edu  Sun Sep  3 18:02:09 2000
From: cgw at alum.mit.edu (Charles G Waldman)
Date: Sun, 3 Sep 2000 11:02:09 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix, etc
Message-ID: <14770.30209.733300.519614@sirius.net.home>

Skip Montanaro writes:

> When I configure --without-threads, the script runs much longer,
> making it past 18068.  It conks out in the same spot, however,
> trying to print 18069.

I am utterly unable to reproduce this.  With "ulimit -s unlimited" and
a no-threads version of Python, "find_recursionlimit" ran overnight on
my system and got up to a recursion depth of 98,400 before I killed it
off.  It was using 74MB of stack space at this point, and my system
was running *really* slow (probably because my pathetic little home
system only has 64MB of physical memory!).
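
[Editorial sketch: the probing idea, tamed.  This catches Python's own
recursion limit; the real Misc/find_recursionlimit.py is riskier, raising
the limit in steps so the *C* stack itself overflows.]

```python
import sys

def probe_depth(limit):
    # Recurse until Python's recursion limit trips, and report how
    # deep we got.  The result is a bit under `limit` because some
    # frames are already on the stack when we start.
    depth = [0]
    def recurse():
        depth[0] += 1
        recurse()
    sys.setrecursionlimit(limit)
    try:
        recurse()
    except RecursionError:
        pass
    return depth[0]

print(probe_depth(2000))
```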

Are you absolutely sure that when you built your non-threaded Python
you did a thorough housecleaning, e.g. "make clobber"?  Sometimes I get
paranoid and type "make distclean", just to be sure - but this
shouldn't be necessary, right?

Can you give me more info about your system?  I'm at kernel 2.2.16,
gcc 2.95.2 and glibc-2.1.3.  How about you?

I've got to know what's going on here, because your experimental
results don't conform to my theory, and I'd rather change your results
than have to change my theory <wink>

     quizzically yr's,

		  -C







From tim_one at email.msn.com  Sun Sep  3 19:17:34 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 3 Sep 2000 13:17:34 -0400
Subject: [License-py20] Re: [Python-Dev] Re: Conflict with the GPL
In-Reply-To: <005a01c015ac$079f1c00$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEMJHDAA.tim_one@email.msn.com>

[Fredrik Lundh]
> ...
> are you saying the 1.6 will be the last version that is
> truly free for commercial use???

If this is a serious question, it disturbs me, because it would demonstrate
a massive meltdown in trust between the community and BeOpen PythonLabs.

If we were willing to screw *any* of Python's

   + Commercial users.
   + Open Source users.
   + GPL users.

we would have given up a month ago (when we first tried to release 2b1 with
a BSD-style license but got blocked).  Unfortunately, the only power we have
in this now is the power to withhold release until the other parties (CNRI
and FSF) agree on a license they can live with too.  If the community thinks
Guido would sell out Python's commercial users to get the FSF's blessing,
*or vice versa*, maybe we should just give up on the basis that we've lost
peoples' trust anyway.  Delaying the releases time after time sure isn't
helping BeOpen's bottom line.





From tim_one at email.msn.com  Sun Sep  3 19:43:15 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 3 Sep 2000 13:43:15 -0400
Subject: [Python-Dev] New commands to display license, credits, copyright info
In-Reply-To: <200009031518.KAA12926@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEMKHDAA.tim_one@email.msn.com>

[Guido]
> The copyright in 2.0 will be 5 or 6 lines (three copyright statements,
> with an "All Rights Reserved" for each -- according to CNRI's wishes).
>
> This will cause a lot of scrolling at the start of a session.
>
> Does anyone care?

I personally hate it:

C:\Code\python\dist\src\PCbuild>python
Python 2.0b1 (#0, Sep  3 2000, 00:31:47) [MSC 32 bit (Intel)] on win32
Copyright (c) 2000 BeOpen.com; All Rights Reserved.
Copyright (c) 1995-2000 Corporation for National Research Initiatives;
All Rights Reserved.
Copyright (c) 1991-1995 Stichting Mathematisch Centrum, Amsterdam;
All Rights Reserved.
>>>

Besides being plain ugly, under Win9x DOS boxes are limited to a max height
of 50 lines, and that's also the max buffer size.  This mass of useless
verbiage (I'm still a programmer 20 minutes of each day <0.7 wink>) has
already interfered with my ability to test the Windows version of Python
(half the old build's stuff I wanted to compare the new build's behavior
with scrolled off the screen the instant I started the new build!).

> Bob Weiner (my boss at BeOpen) suggested that we could add commands
> to display such information instead.  Here's a typical suggestion with
> his idea implemented:
>
>     Python 2.0b1 (#134, Sep  3 2000, 10:04:03)
>     [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
>     Type "copyright", "license" or "credits" for this information.
>     >>> ...

Much better.

+1.





From tim_one at email.msn.com  Sun Sep  3 21:59:36 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 3 Sep 2000 15:59:36 -0400
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src LICENSE,1.1.2.7,1.1.2.8
In-Reply-To: <00a501c0158f$25a5bfa0$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEMPHDAA.tim_one@email.msn.com>

[Fredrik Lundh]
> ...
> I didn't really think I would end up in a situation where people
> can take code I've written, make minor modifications to it, and re-
> release it in source form in a way that makes it impossible for me
> to use it...)

People have *always* been able to do that, /F.  The CWI license was
GPL-compatible (according to RMS), so anyone all along has been able to take
the Python distribution in whole or in part and re-release it under the
GPL -- or even more restrictive licenses than that.  Heck, they don't even
have to reveal their modifications to your code if they don't feel like it
(although they would have to under the GPL).

So there's nothing new here.  In practice, I don't think anyone yet has felt
abused (well, not by *this* <wink>).





From tim_one at email.msn.com  Sun Sep  3 22:22:43 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 3 Sep 2000 16:22:43 -0400
Subject: [Python-Dev] Copyright gag
In-Reply-To: <200009030921.LAA08963@python.inrialpes.fr>
Message-ID: <LNBBLJKPBEHFEDALKOLCIENBHDAA.tim_one@email.msn.com>

[Vladimir Marangozov]
> Sent: Sunday, September 03, 2000 5:22 AM
> To: Python core developers
> Subject: [Python-Dev] Copyright gag
>
> Even CVS got confused about the Python's copyright <wink>
>
> ~> cvs update
> ...
> cvs server: Updating Demo/zlib
> cvs server: Updating Doc
> cvs server: nothing known about Doc/COPYRIGHT
> cvs server: Updating Doc/api
> cvs server: Updating Doc/dist
> ...

Yes, we're all seeing that.  I filed a bug report on it with SourceForge; no
resolution yet; we can't get at the CVS files directly (for "security
reasons"), so they'll have to find the damage & fix it themselves.






From trentm at ActiveState.com  Sun Sep  3 23:10:43 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Sun, 3 Sep 2000 14:10:43 -0700
Subject: [Python-Dev] New commands to display license, credits, copyright info
In-Reply-To: <200009031518.KAA12926@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Sun, Sep 03, 2000 at 10:18:07AM -0500
References: <200009031518.KAA12926@cj20424-a.reston1.va.home.com>
Message-ID: <20000903141043.B28584@ActiveState.com>

On Sun, Sep 03, 2000 at 10:18:07AM -0500, Guido van Rossum wrote:
>     Python 2.0b1 (#134, Sep  3 2000, 10:04:03) 
>     [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2

Yes, I like getting rid of the copyright verbosity.

>     Type "copyright", "license" or "credits" for this information.
>     >>> copyright
>     >>> credits
>     >>> license
>     >>>

... but do we need these?  Can we not just add -V, --version,
--copyright, etc. switches?  Not a big deal, though.


Trent

-- 
Trent Mick
TrentM at ActiveState.com



From nascheme at enme.ucalgary.ca  Mon Sep  4 01:28:04 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Sun, 3 Sep 2000 17:28:04 -0600
Subject: [Python-Dev] New commands to display license, credits, copyright info
In-Reply-To: <200009031518.KAA12926@cj20424-a.reston1.va.home.com>; from Guido van Rossum on Sun, Sep 03, 2000 at 10:18:07AM -0500
References: <200009031518.KAA12926@cj20424-a.reston1.va.home.com>
Message-ID: <20000903172804.A20336@keymaster.enme.ucalgary.ca>

On Sun, Sep 03, 2000 at 10:18:07AM -0500, Guido van Rossum wrote:
> Does anyone care?

Yes.  Although not too much.

> Bob Weiner (my boss at BeOpen) suggested that we could add commands
> to display such information instead.

Much nicer except for one nit.

>     Python 2.0b1 (#134, Sep  3 2000, 10:04:03) 
>     [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
>     Type "copyright", "license" or "credits" for this information.
                                                   ^^^^

For what information?

  Neil



From jeremy at beopen.com  Mon Sep  4 01:59:12 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Sun, 3 Sep 2000 19:59:12 -0400 (EDT)
Subject: [Python-Dev] New commands to display license, credits, copyright info
In-Reply-To: <20000903172804.A20336@keymaster.enme.ucalgary.ca>
References: <200009031518.KAA12926@cj20424-a.reston1.va.home.com>
	<20000903172804.A20336@keymaster.enme.ucalgary.ca>
Message-ID: <14770.58832.801784.267646@bitdiddle.concentric.net>

>>>>> "NS" == Neil Schemenauer <nascheme at enme.ucalgary.ca> writes:

  >> Python 2.0b1 (#134, Sep 3 2000, 10:04:03) 
  >> [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 
  >> Type "copyright", "license" or "credits" for this information.
  NS>                                             ^^^^
  NS> For what information?

I think this is a one-line version of 'Type "copyright" for copyright
information, "license" for license information, or "credits" for
credits information.'

I think the meaning is clear if the phrasing is awkward.  Would 'that'
be any better than 'this'?

Jeremy



From root at buffalo.fnal.gov  Mon Sep  4 02:00:00 2000
From: root at buffalo.fnal.gov (root)
Date: Sun, 3 Sep 2000 19:00:00 -0500
Subject: [Python-Dev] New commands to display license, credits, copyright info
Message-ID: <200009040000.TAA19857@buffalo.fnal.gov>

Jeremy wrote:

 > I think the meaning is clear if the phrasing is awkward.  Would 'that'
 > be any better than 'this'?

To my ears, "that" is just as awkward as "this".  But in this context,
I think "more" gets the point across and sounds much more natural.




From Vladimir.Marangozov at inrialpes.fr  Mon Sep  4 02:07:03 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Mon, 4 Sep 2000 02:07:03 +0200 (CEST)
Subject: [Python-Dev] libdb on by default, but no db.h
Message-ID: <200009040007.CAA14488@python.inrialpes.fr>

On my AIX combo, configure assumes --with-libdb (yes) but reports that

...
checking for db_185.h... no
checking for db.h... no
...

This leaves the bsddbmodule enabled but it can't compile, obviously.
So this needs to be fixed ASAP.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From Vladimir.Marangozov at inrialpes.fr  Mon Sep  4 03:16:20 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Mon, 4 Sep 2000 03:16:20 +0200 (CEST)
Subject: [Python-Dev] New commands to display license, credits, copyright info
In-Reply-To: <200009031518.KAA12926@cj20424-a.reston1.va.home.com> from "Guido van Rossum" at Sep 03, 2000 10:18:07 AM
Message-ID: <200009040116.DAA14774@python.inrialpes.fr>

Guido van Rossum wrote:
> 
> The copyright in 2.0 will be 5 or 6 lines (three copyright statements,
> with an "All Rights Reserved" for each -- according to CNRI's wishes).
> 
> This will cause a lot of scrolling at the start of a session.
> 
> Does anyone care?

Not much, but this is annoying information anyway :-)

> 
>     Python 2.0b1 (#134, Sep  3 2000, 10:04:03) 
>     [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
>     Type "copyright", "license" or "credits" for this information.
>     >>> copyright
>     Copyright (c) 2000 BeOpen.com; All Rights Reserved.
>     Copyright (c) 1995-2000 Corporation for National Research Initiatives;
>     All Rights Reserved.
>     Copyright (c) 1991-1995 Stichting Mathematisch Centrum, Amsterdam;
>     All Rights Reserved.

A semicolon before "All Rights Reserved" is ugly; IMO, it should be a
period.  "All Rights Reserved" should probably go on a new line for each
of the three copyright holders, and they could be separated by blank
lines for readability.

Otherwise, I like the proposed "type ... for more information".

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From skip at mojam.com  Mon Sep  4 03:10:26 2000
From: skip at mojam.com (Skip Montanaro)
Date: Sun, 3 Sep 2000 20:10:26 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix, etc
In-Reply-To: <14770.30209.733300.519614@sirius.net.home>
References: <14770.30209.733300.519614@sirius.net.home>
Message-ID: <14770.63106.529258.156519@beluga.mojam.com>

    Charles> I am utterly unable to reproduce this.  With "ulimit -s
    Charles> unlimited" and a no-threads version of Python,
    Charles> "find_recursionlimit" ran overnight on my system and got up to
    Charles> a recursion depth of 98,400 before I killed it off.

Mea culpa.  It seems I forgot the "ulimit -s unlimited" command.  Keep your
theory, but get a little more memory.  It only took me a few seconds to
exceed a recursion depth of 100,000 after properly setting the stack size
limit... ;-)

Skip






From cgw at alum.mit.edu  Mon Sep  4 04:33:24 2000
From: cgw at alum.mit.edu (Charles G Waldman)
Date: Sun, 3 Sep 2000 21:33:24 -0500
Subject: [Python-Dev] Thread problems on Linux
Message-ID: <200009040233.VAA27866@sirius>

No, I still don't have the answer, but I came across a very interesting
bit in the `info' files for glibc-2.1.3.  Under a heading "Specific Advice
for Linux Systems", along with a bunch of info about installing glibc,
is this gem:

 >    You cannot use `nscd' with 2.0 kernels, due to bugs in the
 > kernel-side thread support.  `nscd' happens to hit these bugs
 > particularly hard, but you might have problems with any threaded
 > program.

Now, they are talking about 2.0 and I assume everyone here running Linux
is running 2.2.  However it makes one wonder whether all the bugs in
kernel-side thread support are really fixed in 2.2.  One of these days
we'll figure it out...




From tim_one at email.msn.com  Mon Sep  4 04:44:28 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 3 Sep 2000 22:44:28 -0400
Subject: [Python-Dev] Thread problems on Linux
In-Reply-To: <200009040233.VAA27866@sirius>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEOBHDAA.tim_one@email.msn.com>

Did we ever get a little "pure C" program that illustrates the mystery here?
That's probably still the only way to get a Linux guru interested, and also
the best way to know whether the problem is fixed in a future release (i.e.,
by running the sucker and seeing whether it still misbehaves).

I could believe, e.g., that they fixed pthread locks fine, but that there's
still a subtle problem with pthread condition vrbls.  To the extent Jeremy's
stacktraces made any sense, they showed insane condvar symptoms (a parent
doing a pthread_cond_wait yet chewing cycles at a furious pace).





From tim_one at email.msn.com  Mon Sep  4 05:11:09 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 3 Sep 2000 23:11:09 -0400
Subject: [Python-Dev] Even more prerelease Python fun on Windows!
In-Reply-To: <007901c014c0$852eff60$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEOCHDAA.tim_one@email.msn.com>

[Fredrik Lundh]
> just fyi, Tkinter seems to be extremely unstable on Win95 and
> Win98FE (when shut down, the python process grabs the key-
> board and hangs.  the only way to kill the process is to reboot)
>
> the same version of Tk (wish) works just fine...

So what can we do about this?  I'm wary about two things:

1. Thomas reported one instance of Win98FE rot, of a kind that simply
   plagues Windows for any number of reasons.  He wasn't able to
   reproduce it.  So while I've noted his report, I'm giving it little
   weight so far.

2. I never use Tkinter, except indirectly for IDLE.  I've been in and
   out of 2b1 IDLE on Win98SE all day and haven't seen a hint of trouble.

   But you're a Tkinter power user of the highest order.  So one thing
   I'm wary of is that you may have magical Tcl/Tk envars (or God only
   knows what else) set up to deal with the multiple copies of Tcl/Tk
   I'm betting you have on your machine.  In fact, I *know* you have
   multiple Tcl/Tks sitting around because of your wish comment:
   the Python installer no longer installs wish, so you got that from
   somewhere else.  Are you positive you're not mixing versions
   somehow?  If anyone could mix them in a way we can't stop, it's
   you <wink>.

If anyone else is having Tkinter problems, they haven't reported them.
Although I suspect few have tried it!

In the absence of more helpers, can you pass on a specific (small if
possible) program that exhibits the "hang" problem?  And by "extremely
unstable", do you mean that there are many strange problems, or is the "hang
on exit" problem the only one?

Thanks in advance!

beleagueredly y'rs  - tim





From skip at mojam.com  Mon Sep  4 05:12:06 2000
From: skip at mojam.com (Skip Montanaro)
Date: Sun, 3 Sep 2000 22:12:06 -0500 (CDT)
Subject: [Python-Dev] libdb on by default, but no db.h
In-Reply-To: <200009040007.CAA14488@python.inrialpes.fr>
References: <200009040007.CAA14488@python.inrialpes.fr>
Message-ID: <14771.4870.954882.513141@beluga.mojam.com>


    Vlad> On my AIX combo, configure assumes --with-libdb (yes) but reports
    Vlad> that

    Vlad> ...
    Vlad> checking for db_185.h... no
    Vlad> checking for db.h... no
    Vlad> ...

    Vlad> This leaves the bsddbmodule enabled but it can't compile,
    Vlad> obviously.  So this needs to be fixed ASAP.

Oops.  Please try the attached patch and let me know if it runs better.
(Don't forget to run autoconf.)  Besides fixing the problem you
reported, it tells users why bsddb was not enabled if they asked for it
but it could not be supported.

Skip

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: configure.in.patch
URL: <http://mail.python.org/pipermail/python-dev/attachments/20000903/2ed2fa8b/attachment-0001.txt>

From greg at cosc.canterbury.ac.nz  Mon Sep  4 05:21:14 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon, 04 Sep 2000 15:21:14 +1200 (NZST)
Subject: [Python-Dev] Re: DEBUG_SAVEALL feature for gc not in 2.0b1?
In-Reply-To: <200009021407.QAA29710@python.inrialpes.fr>
Message-ID: <200009040321.PAA18947@s454.cosc.canterbury.ac.nz>

Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov):

> The point is that we have two types of garbage: collectable and
> uncollectable.

I don't think these are the right terms. The collector can
collect the "uncollectable" garbage all right -- what it can't
do is *dispose* of it. So it should be called "undisposable"
or "unrecyclable" or "undigestable" or something.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From Vladimir.Marangozov at inrialpes.fr  Mon Sep  4 05:51:31 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Mon, 4 Sep 2000 05:51:31 +0200 (CEST)
Subject: [Python-Dev] libdb on by default, but no db.h
In-Reply-To: <14771.4870.954882.513141@beluga.mojam.com> from "Skip Montanaro" at Sep 03, 2000 10:12:06 PM
Message-ID: <200009040351.FAA19784@python.inrialpes.fr>

Skip Montanaro wrote:
> 
> Oops.  Please try the attached patch and let me know it it runs better.

Runs fine. Thanks!

After looking again at Modules/Setup.config, I wonder whether it would
be handy to add a configure option --with-shared (or similar) which would
uncomment #*shared* there and in Setup automatically (in line with the
other recent niceties like --with-pydebug).

Uncommenting them manually in two files now is a pain... :-)

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From skip at mojam.com  Mon Sep  4 06:06:40 2000
From: skip at mojam.com (Skip Montanaro)
Date: Sun, 3 Sep 2000 23:06:40 -0500 (CDT)
Subject: [Python-Dev] libdb on by default, but no db.h
In-Reply-To: <200009040351.FAA19784@python.inrialpes.fr>
References: <14771.4870.954882.513141@beluga.mojam.com>
	<200009040351.FAA19784@python.inrialpes.fr>
Message-ID: <14771.8144.959081.410574@beluga.mojam.com>

    Vlad> After looking again at Modules/Setup.config, I wonder whether it
    Vlad> would be handy to add a configure option --with-shared (or
    Vlad> similar) which would uncomment #*shared* there and in Setup
    Vlad> automatically (in line with the other recent niceties like
    Vlad> --with-pydebug).

    Vlad> Uncommenting them manually in two files now is a pain... :-)

Agreed.  I'll submit a patch.

Skip



From skip at mojam.com  Mon Sep  4 06:16:52 2000
From: skip at mojam.com (Skip Montanaro)
Date: Sun, 3 Sep 2000 23:16:52 -0500 (CDT)
Subject: [Python-Dev] libdb on by default, but no db.h
In-Reply-To: <200009040351.FAA19784@python.inrialpes.fr>
References: <14771.4870.954882.513141@beluga.mojam.com>
	<200009040351.FAA19784@python.inrialpes.fr>
Message-ID: <14771.8756.760841.38442@beluga.mojam.com>

    Vlad> After looking again at Modules/Setup.config, I wonder whether it
    Vlad> would be handy to add a configure option --with-shared (or
    Vlad> similar) which would uncomment #*shared* there and in Setup
    Vlad> automatically (in line with the other recent niceties like
    Vlad> --with-pydebug).

On second thought, I think this is not a good idea right now because
Modules/Setup is not usually fiddled by the configure step.  If "#*shared*"
existed in Modules/Setup and the user executed "./configure --with-shared",
they'd be disappointed that the modules declared in Modules/Setup following
that line weren't built as shared objects.

Skip




From greg at cosc.canterbury.ac.nz  Mon Sep  4 06:34:02 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon, 04 Sep 2000 16:34:02 +1200 (NZST)
Subject: [Python-Dev] New commands to display license, credits,
 copyright info
In-Reply-To: <14770.58832.801784.267646@bitdiddle.concentric.net>
Message-ID: <200009040434.QAA18957@s454.cosc.canterbury.ac.nz>

Jeremy Hylton <jeremy at beopen.com>:

> I think the meaning is clear if the phrasing is awkward.  Would 'that'
> be any better than 'this'?

How about "for more information"?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From tim_one at email.msn.com  Mon Sep  4 10:08:27 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Mon, 4 Sep 2000 04:08:27 -0400
Subject: [Python-Dev] ME so mmap
In-Reply-To: <DOEGJPEHJOJKDFNLNCHIKEDJCAAA.audun@mindspring.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEOLHDAA.tim_one@email.msn.com>

Audun S. Runde mailto:audun at mindspring.com wins a Fabulous Prize for being
our first Windows ME tester!  Also our only, and I think he should get
another prize just for that.

The good news is that the creaky old Wise installer worked.  The bad news is
that we've got a Windows-ME-specific std test failure, in test_mmap.

This is from the installer available via anonymous FTP from
python.beopen.com,

     /pub/windows/beopen-python2b1p2-20000901.exe
     5,783,115 bytes

and here's the meat of the bad news in Audun's report:

> PLATFORM 2.
> Windows ME
> (version/build 4.90.3000 aka. "Technical Beta Special Edition"
> -- claimed to be identical to the shipping version),
> no previous Python install
> =============================================================
>
> + Try
>     python lib/test/regrtest.py
>
> --> results:
> 76 tests OK.
> 1 test failed: test_mmap (see below)
> 23 tests skipped (al, cd, cl, crypt, dbm, dl, fcntl, fork1, gdbm, gl, grp,
> imgfile, largefile, linuxaudiodev, minidom, nis, openpty, poll, pty, pwd,
> signal, sunaudiodev, timing)
>
> Rerun of test_mmap.py:
> ----------------------
> C:\Python20\Lib\test>..\..\python test_mmap.py
> Traceback (most recent call last):
>   File "test_mmap.py", line 121, in ?
>     test_both()
>   File "test_mmap.py", line 18, in test_both
>     m = mmap.mmap(f.fileno(), 2 * PAGESIZE)
> WindowsError: [Errno 6] The handle is invalid
>
> C:\Python20\Lib\test>
>
>
> --> Please let me know if there is anything I can do to help with
> --> this -- but I might need detailed instructions ;-)

So we're not even getting off the ground with mmap on ME -- it's dying in
the mmap constructor.  I'm sending this to Mark Hammond directly because he
was foolish enough <wink> to fix many mmap-on-Windows problems, but if any
other developer has access to ME feel free to grab this joy away from him.
There are no reports of test_mmap failing on any other flavor of Windows
(clean reports from 95, 98, NT, and 2000), it looks extremely unlikely
that it's a flaw in the installer, and it's a gross problem right at the
start.
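For anyone wanting to reproduce this on another box, the failing call reduces to roughly the following sketch (the temp file and its contents are arbitrary; test_mmap does essentially the same setup before the constructor call that dies on ME):

```python
import mmap
import os
import tempfile

PAGESIZE = mmap.PAGESIZE

fd, path = tempfile.mkstemp()
try:
    with os.fdopen(fd, "r+b") as f:
        # The file must already span the region being mapped.
        f.write(b"\0" * (2 * PAGESIZE))
        f.flush()
        # This is the constructor call that raises
        # "WindowsError: [Errno 6] The handle is invalid" on ME.
        m = mmap.mmap(f.fileno(), 2 * PAGESIZE)
        m[:5] = b"hello"
        assert m[:5] == b"hello"
        m.close()
finally:
    os.unlink(path)
```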

Best guess now is that it's a bug in ME.  What?  A bug in a new flavor of
Windows?!  Na, couldn't be ...

may-as-well-believe-that-money-doesn't-grow-on-trees-ly y'rs  - tim





From tim_one at email.msn.com  Mon Sep  4 10:49:12 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Mon, 4 Sep 2000 04:49:12 -0400
Subject: [Python-Dev] Even more prerelease Python fun on Windows!
In-Reply-To: <200009021500.RAA00776@python.inrialpes.fr>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEONHDAA.tim_one@email.msn.com>

[Vladimir Marangozov, heroically responds to pleas for Windows help!]

>     /pub/windows/beopen-python2b1p2-20000901.exe
>     5,783,115 bytes
>
> In case my feedback matters, being a Windows amateur,

That's *good*:  amateurs make better testers because they're less prone to
rationalize away problems or gloss over things they needed to fix by hand.

> the installation went smoothly on my home P100

You're kidding, right?  They give away faster processors in cereal boxes now
<wink>.

> with some early Win95 pre-release.

Brrrrrrr.  Even toxic waste dumps won't accept *those* anymore!

> In the great Windows tradition, I was asked to reboot & did so.

That's interesting -- first report of a reboot I've gotten.  But it makes
sense:  everyone else who has tried this is an eager Windows beta tester or
a Python Windows developer, so all their system files are likely up to date.
Windows only makes you reboot if it has to *replace* a system file with a
newer one from the install (unlike Unix, Windows won't let you "unlink" a
file that's in use; that's why they have to replace popular system files
during the reboot, *before* Windows proper starts up).

> The regression tests passed in console mode.

Frankly, I'm amazed!  Please don't test anymore <0.9 wink>.

> Then launched successfully IDLE. In IDLE I get *beep* sounds every
> time I hit RETURN without typing anything.  I was able to close both
> the console and IDLE without problems.

Assuming you saw Guido's msg about the *beep*s.  If not, it's an IDLE buglet
and you're not alone.  Won't be fixed for 2b1, maybe by 2.0.

> Haven't tried the uninstall link, though.

It will work -- kinda.  It doesn't really uninstall everything on any flavor
of Windows.  I think BeOpen.com should agree to buy me an installer newer
than your Win95 prerelease.

> don't-ask-me-any-questions-about-Windows'ly y'rs

I was *going* to, and I still am.  And your score is going on your Permanent
Record, so don't screw this up!  But since you volunteered such a nice and
helpful test report, I'll give you a relatively easy one:  which company
sells Windows?

A. BeOpen PythonLabs
B. ActiveState
C. ReportLabs
D. Microsoft
E. PythonWare
F. Red Hat
G. General Motors
H. Corporation for National Research Initiatives
I. Free Software Foundation
J. Sun Microsystems
K. National Security Agency

hint:-it's-the-only-one-without-an-"e"-ly y'rs  - tim





From nascheme at enme.ucalgary.ca  Mon Sep  4 16:18:28 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Mon, 4 Sep 2000 08:18:28 -0600
Subject: [Python-Dev] Thread problems on Linux
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEOBHDAA.tim_one@email.msn.com>; from Tim Peters on Sun, Sep 03, 2000 at 10:44:28PM -0400
References: <200009040233.VAA27866@sirius> <LNBBLJKPBEHFEDALKOLCGEOBHDAA.tim_one@email.msn.com>
Message-ID: <20000904081828.B23753@keymaster.enme.ucalgary.ca>

The pthread model does not map well into the Linux clone model.  The
standard seems to assume that threads are implemented as a process.
Linus is adding some extra features in 2.4 which may help (thread
groups).  We will see if the glibc maintainers can make use of these.

I'm thinking of creating a thread_linux header file.  Do you think that
would be a good idea?  clone() seems to be pretty easy to use although
it is quite low level.

  Neil



From guido at beopen.com  Mon Sep  4 17:40:58 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 04 Sep 2000 10:40:58 -0500
Subject: [Python-Dev] Thread problems on Linux
In-Reply-To: Your message of "Mon, 04 Sep 2000 08:18:28 CST."
             <20000904081828.B23753@keymaster.enme.ucalgary.ca> 
References: <200009040233.VAA27866@sirius> <LNBBLJKPBEHFEDALKOLCGEOBHDAA.tim_one@email.msn.com>  
            <20000904081828.B23753@keymaster.enme.ucalgary.ca> 
Message-ID: <200009041540.KAA23263@cj20424-a.reston1.va.home.com>

> The pthread model does not map well into the Linux clone model.  The
> standard seems to assume that threads are implemented as a process.
> Linus is adding some extra features in 2.4 which may help (thread
> groups).  We will see if the glibc maintainers can make use of these.
> 
> I'm thinking of creating a thread_linux header file.  Do you think that
> would be a good idea?  clone() seems to be pretty easy to use although
> it is quite low level.

This seems nice at first, but probably won't work too well when you
consider embedding Python in applications that use the Posix threads
library.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From cgw at alum.mit.edu  Mon Sep  4 17:02:03 2000
From: cgw at alum.mit.edu (Charles G Waldman)
Date: Mon, 4 Sep 2000 10:02:03 -0500
Subject: [Python-Dev] mail sent as "root"
Message-ID: <200009041502.KAA05864@buffalo.fnal.gov>

sorry for the mail sent as "root" - d'oh.  I still am not able to
send mail from fnal.gov to python.org (no route to host) and am
playing some screwy games to get my mail delivered.




From cgw at alum.mit.edu  Mon Sep  4 17:52:42 2000
From: cgw at alum.mit.edu (Charles G Waldman)
Date: Mon, 4 Sep 2000 10:52:42 -0500
Subject: [Python-Dev] Thread problems on Linux
Message-ID: <200009041552.KAA06048@buffalo.fnal.gov>

Neil wrote:

>I'm thinking of creating a thread_linux header file.  Do you think that 
>would be a good idea?  clone() seems to be pretty easy to use although 
>it is quite low level. 
 
Sounds like a lot of work to me.   The pthread library gets us two
things (essentially) - a function to create threads, which you could
pretty easily replace with clone(), and other functions to handle
mutexes and conditions.  If you replace pthread_create with clone
you have a lot of work to do to implement the locking stuff... Of
course, if you're willing to do this work, then more power to you.
But from my point of view, I'm at a site where we're using pthreads
on Linux in non-Python applications as well, so I'm more interested
in diagnosing and trying to fix (or at least putting together a   
detailed and coherent bug report on) the platform bugs, rather than
trying to work around them in the Python interpreter.





From Vladimir.Marangozov at inrialpes.fr  Mon Sep  4 20:11:33 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Mon, 4 Sep 2000 20:11:33 +0200 (CEST)
Subject: [Python-Dev] Even more prerelease Python fun on Windows!
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKEONHDAA.tim_one@email.msn.com> from "Tim Peters" at Sep 04, 2000 04:49:12 AM
Message-ID: <200009041811.UAA21177@python.inrialpes.fr>

Tim Peters wrote:
> 
> [Vladimir Marangozov, heroically responds to pleas for Windows help!]
> 
> That's *good*:  amateurs make better testers because they're less prone to
> rationalize away problems or gloss over things they needed to fix by hand.

Thanks. This is indeed the truth.

> 
> > the installation went smoothly on my home P100
> 
> You're kidding, right?  They give away faster processors in cereal boxes now
> <wink>.

No. I'm proud to possess a working Pentium 100 with the F0 0F bug. This
is a genuine snapshot of the advances of a bunch of technologies at the
end of the XX century.

> 
> > with some early Win95 pre-release.
> 
> Brrrrrrr.  Even toxic waste dumps won't accept *those* anymore!

see above.

> 
> > Haven't tried the uninstall link, though.
> 
> It will work -- kinda.  It doesn't really uninstall everything on any flavor
> of Windows.  I think BeOpen.com should agree to buy me an installer newer
> than your Win95 prerelease.

Wasn't brave enough to reboot once again <wink>.

> 
> > don't-ask-me-any-questions-about-Windows'ly y'rs
> 
> I was *going* to, and I still am.

Seriously, if you need more feedback, you'd have to give me click by click
instructions. I'm in trouble each time I want to do any real work within
the Windows clickodrome.

> And your score is going on your Permanent Record, so don't screw this up!
> But since you volunteered such a nice and helpful test report, I'll give
> you a relatively easy one:  which company sells Windows?
> 
> A. BeOpen PythonLabs
> B. ActiveState
> C. ReportLabs
> D. Microsoft
> E. PythonWare
> F. Red Hat
> G. General Motors
> H. Corporation for National Research Initiatives
> I. Free Software Foundation
> J. Sun Microsystems
> K. National Security Agency
> 
> hint:-it's-the-only-one-without-an-"e"-ly y'rs  - tim
> 

Hm. Thanks for the hint! Let's see. It's not "me" for sure. Could
be "you" though <wink>. I wish it was General Motors...

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From nascheme at enme.ucalgary.ca  Mon Sep  4 21:28:38 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Mon, 4 Sep 2000 13:28:38 -0600
Subject: [Python-Dev] Thread problems on Linux
In-Reply-To: <200009041504.KAA05892@buffalo.fnal.gov>; from Charles G Waldman on Mon, Sep 04, 2000 at 10:04:40AM -0500
References: <200009041504.KAA05892@buffalo.fnal.gov>
Message-ID: <20000904132838.A25571@keymaster.enme.ucalgary.ca>

On Mon, Sep 04, 2000 at 10:04:40AM -0500, Charles G Waldman wrote:
>If you replace pthread_create with clone you have a lot of work to do
>to implement the locking stuff...

Locks exist in /usr/include/asm.  It is Linux specific but so is
clone().

  Neil



From thomas at xs4all.net  Mon Sep  4 22:14:39 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 4 Sep 2000 22:14:39 +0200
Subject: [Python-Dev] Vacation
Message-ID: <20000904221438.U12695@xs4all.nl>

I'll be offline for two weeks, enjoying a sunny (hopefully!) holiday in
southern Italy. I uploaded the docs I had for augmented assignment; not
terribly much I'm afraid :P We had some trouble at work over the weekend,
which cost me most of the time I thought I had to finish some of this up.

(For the developers among you that, like me, do a bit of sysadmining on the
side: one of our nameservers was hacked, either through password-guessing
(unlikely), sniffing (unlikely), a hole in ssh (1.2.26, possible but
unlikely) or a hole in named (BIND 8.2.2-P5, very unlikely). There was a
copy of the named binary in /tmp under an obscure filename, which leads us
to believe it was the latter -- which scares the shit out of me personally,
as anything before P3 was proven to be insecure, and the entire sane world
and their dog runs P5. Possibly it was 'just' a bug in Linux/RedHat, though.
Cleaning up after scriptkiddies, a great way to spend your weekend before
your vacation, let me tell you! :P)

I'll be back on the 19th, plenty of time left to do beta testing after that
:)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From rob at hooft.net  Tue Sep  5 08:15:04 2000
From: rob at hooft.net (Rob W. W. Hooft)
Date: Tue, 5 Sep 2000 08:15:04 +0200 (CEST)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Misc NEWS,1.52,1.53
In-Reply-To: <200009050438.VAA03390@slayer.i.sourceforge.net>
References: <200009050438.VAA03390@slayer.i.sourceforge.net>
Message-ID: <14772.36712.451676.957918@temoleh.chem.uu.nl>

! Augmented Assignment
! --------------------
!
! This must have been the most-requested feature of the past years!
! Eleven new assignment operators were added:
!
!     += -+ *= /= %= **= <<= >>= &= ^= |=

Interesting operator "-+" in there! I won't submit this as a patch
to sourceforge....

Index: dist/src/Misc/NEWS
===================================================================
RCS file: /cvsroot/python/python/dist/src/Misc/NEWS,v
retrieving revision 1.53
diff -u -c -r1.53 NEWS
cvs server: conflicting specifications of output style
*** dist/src/Misc/NEWS  2000/09/05 04:38:34     1.53
--- dist/src/Misc/NEWS  2000/09/05 06:14:16
***************
*** 66,72 ****
  This must have been the most-requested feature of the past years!
  Eleven new assignment operators were added:
  
!     += -+ *= /= %= **= <<= >>= &= ^= |=
  
  For example,
  
--- 66,72 ----
  This must have been the most-requested feature of the past years!
  Eleven new assignment operators were added:
  
!     += -= *= /= %= **= <<= >>= &= ^= |=
  
  For example,
  
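With the typo fixed, the eleven operators behave as you'd expect; a quick sketch:

```python
x = [1, 2]
x += [3]          # list.__iadd__ extends in place
n = 7
n -= 2            # "-=" (the operator the NEWS entry meant, not "-+")
n **= 2
n <<= 1
bits = 0b1100
bits &= 0b1010
bits |= 0b0001
assert x == [1, 2, 3]
assert n == 50        # ((7 - 2) ** 2) << 1
assert bits == 0b1001
```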


Regards,

Rob Hooft

-- 
=====   rob at hooft.net          http://www.hooft.net/people/rob/  =====
=====   R&D, Nonius BV, Delft  http://www.nonius.nl/             =====
===== PGPid 0xFA19277D ========================== Use Linux! =========



From bwarsaw at beopen.com  Tue Sep  5 09:23:55 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Tue, 5 Sep 2000 03:23:55 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Misc NEWS,1.52,1.53
References: <200009050438.VAA03390@slayer.i.sourceforge.net>
	<14772.36712.451676.957918@temoleh.chem.uu.nl>
Message-ID: <14772.40843.669856.756485@anthem.concentric.net>

>>>>> "RWWH" == Rob W W Hooft <rob at hooft.net> writes:

    RWWH> Interesting operator "-+" in there! I won't submit this as
    RWWH> patch to sourceforge....

It's Python 2.0's way of writing "no op" :)

I've already submitted this internally.  Doubt it will make it into
2.0b1, but we'll get it into 2.0 final.

-Barry



From mbel44 at dial.pipex.net  Tue Sep  5 13:19:42 2000
From: mbel44 at dial.pipex.net (Toby Dickenson)
Date: Tue, 05 Sep 2000 12:19:42 +0100
Subject: [Python-Dev] Re: [I18n-sig] ustr
In-Reply-To: <200007071244.HAA03694@cj20424-a.reston1.va.home.com>
References: <r39bmsc6remdupiv869s5agm46m315ebeq@4ax.com>   <3965BBE5.D67DD838@lemburg.com> <200007071244.HAA03694@cj20424-a.reston1.va.home.com>
Message-ID: <vhl9rsclpk9e89oaeehpg7sec79ar8cdru@4ax.com>

On Fri, 07 Jul 2000 07:44:03 -0500, Guido van Rossum
<guido at beopen.com> wrote:

We debated a ustr function in July. Does anyone have this in hand? I
can prepare a patch if necessary.

>> Toby Dickenson wrote:
>> > 
>> > I'm just nearing the end of getting Zope to play well with unicode
>> > data. Most of the changes involved replacing a call to str, in
>> > situations where either a unicode or narrow string would be
>> > acceptable.
>> > 
>> > My best alternative is:
>> > 
>> > def convert_to_something_stringlike(x):
>> >     if type(x)==type(u''):
>> >         return x
>> >     else:
>> >         return str(x)
>> > 
>> > This seems like a fundamental operation - would it be worth having
>> > something similar in the standard library?
>
>Marc-Andre Lemburg replied:
>
>> You mean: for Unicode return Unicode and for everything else
>> return strings ?
>> 
>> It doesn't fit well with the builtins str() and unicode(). I'd
>> say, make this a userland helper.
>
>I think this would be helpful to have in the std library.  Note that
>in JPython, you'd already use str() for this, and in Python 3000 this
>may also be the case.  At some point in the design discussion for the
>current Unicode support we also thought that we wanted str() to do
>this (i.e. allow 8-bit and Unicode string returns), until we realized
>that there were too many places that would be very unhappy if str()
>returned a Unicode string!
>
>The problem is similar to a situation you have with numbers: sometimes
>you want a coercion that converts everything to float except it should
>leave complex numbers complex.  In other words it coerces up to float
>but it never coerces down to float.  Luckily you can write that as
>"x+0.0" which converts int and long to float with the same value while
>leaving complex alone.
>
>For strings there is no compact notation like "+0.0" if you want to
>convert to string or Unicode -- adding "" might work in Perl, but not
>in Python.
>
>I propose ustr(x) with the semantics given by Toby.  Class support (an
>__ustr__ method, with fallbacks on __str__ and __unicode__) would also
>be handy.
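Guido's proposal might be sketched like this (adapted to modern terms where str is the text type; the __ustr__ class hook is hypothetical, just as it is in the proposal):

```python
def ustr(x):
    # Sketch of the proposed ustr(): text passes through unchanged,
    # everything else is coerced with str().  In 2.0 terms the type test
    # was "type(x) == type(u'')"; the __ustr__ hook is hypothetical.
    hook = getattr(type(x), "__ustr__", None)
    if hook is not None:
        return hook(x)
    if isinstance(x, str):
        return x
    return str(x)

assert ustr("caf\u00e9") == "caf\u00e9"   # text is returned untouched
assert ustr(42) == "42"                   # non-strings are coerced
```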


Toby Dickenson
tdickenson at geminidataloggers.com



From guido at beopen.com  Tue Sep  5 16:29:44 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 05 Sep 2000 09:29:44 -0500
Subject: [Python-Dev] License status and 1.6 and 2.0 releases
Message-ID: <200009051429.JAA19296@cj20424-a.reston1.va.home.com>

Folks,

After a Labor Day weekend full of excitement, I have good news and bad
news.

The good news is that both Python 1.6 and Python 2.0b1 will be
released today (in *some* US timezone :-).  The former from
python.org, the latter from pythonlabs.com.

The bad news is that there's still no agreement from Stallman that the
CNRI open source license is GPL-compatible.  See my previous post
here.  (Re: Conflict with the GPL.)  Given that we still don't know
that dual licensing will be necessary and sufficient to make the 2.0
license GPL-compatible, we decided not to go for dual licensing just
yet -- if it transpires later that it is necessary, we'll add it to
the 2.0 final license.

At this point, our best shot seems to be to arrange a meeting between
CNRI's lawyer and Stallman's lawyer.  Without the lawyers there, we
never seem to be able to get a commitment to an agreement.  CNRI is
willing to do this; Stallman's lawyer (Eben Moglen; he's a law
professor at Columbia U, not NYU as I previously mentioned) is even
harder to get a hold of than Stallman himself, so it may be a while.
Given CNRI's repeatedly expressed commitment to move this forward, I
don't want to hold up any of the releases that were planned for today
any longer.

So look forward to announcements later today, and get out the
(qualified) champagne...!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From Vladimir.Marangozov at inrialpes.fr  Tue Sep  5 16:17:36 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Tue, 5 Sep 2000 16:17:36 +0200 (CEST)
Subject: [Python-Dev] License status and 1.6 and 2.0 releases
In-Reply-To: <200009051430.JAA19323@cj20424-a.reston1.va.home.com> from "Guido van Rossum" at Sep 05, 2000 09:30:32 AM
Message-ID: <200009051417.QAA27424@python.inrialpes.fr>

Guido van Rossum wrote:
> 
> Folks,
> 
> After a Labor Day weekend ful excitement, I have good news and bad
> news.

Don't worry about the bad news! :-)

> 
> The good news is that both Python 1.6 and Python 2.0b1 will be
> released today (in *some* US timezone :-).  The former from
> python.org, the latter from pythonlabs.com.

Great! Regarding the latest call for help with patches, tell us which
patches you want, and from whom, among those you know about.

> 
> The bad news is that there's still no agreement from Stallman that the
> CNRI open source license is GPL-compatible.

This is no surprise.  I don't think they will agree any time soon.
If they do so by the end of the year, that would make us happy, though.

> So look forward to announcements later today, and get out the
> (qualified) champagne...!

Ahem, which one?
Veuve Cliquot, Dom Perignon, Moet & Chandon or Taittinger Millésimé? :-)

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From skip at mojam.com  Tue Sep  5 16:16:39 2000
From: skip at mojam.com (Skip Montanaro)
Date: Tue, 5 Sep 2000 09:16:39 -0500 (CDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Misc NEWS,1.53,1.54
In-Reply-To: <200009051242.FAA13258@slayer.i.sourceforge.net>
References: <200009051242.FAA13258@slayer.i.sourceforge.net>
Message-ID: <14773.71.989338.110654@beluga.mojam.com>


    Guido> I could use help here!!!!  Please mail me patches ASAP.  We may have
    Guido> to put some of this off to 2.0final, but it's best to have it in shape
    Guido> now...

Attached.

Skip

-------------- next part --------------
A non-text attachment was scrubbed...
Name: news.patch
Type: application/octet-stream
Size: 539 bytes
Desc: note about readline history
URL: <http://mail.python.org/pipermail/python-dev/attachments/20000905/3ad2996f/attachment-0001.obj>

From jeremy at beopen.com  Tue Sep  5 16:58:46 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Tue, 5 Sep 2000 10:58:46 -0400 (EDT)
Subject: [Python-Dev] malloc restructuring in 1.6
Message-ID: <14773.2598.24665.940797@bitdiddle.concentric.net>

I'm editing the NEWS file for 2.0 and noticed that Vladimir's malloc
changes are listed as new for 2.0.  I think they actually went into
1.6, but I'm not certain.  Can anyone confirm?

Jeremy



From petrilli at amber.org  Tue Sep  5 17:19:05 2000
From: petrilli at amber.org (Christopher Petrilli)
Date: Tue, 5 Sep 2000 11:19:05 -0400
Subject: [Python-Dev] License status and 1.6 and 2.0 releases
In-Reply-To: <200009051417.QAA27424@python.inrialpes.fr>; from Vladimir.Marangozov@inrialpes.fr on Tue, Sep 05, 2000 at 04:17:36PM +0200
References: <200009051430.JAA19323@cj20424-a.reston1.va.home.com> <200009051417.QAA27424@python.inrialpes.fr>
Message-ID: <20000905111904.A14540@trump.amber.org>

Vladimir Marangozov [Vladimir.Marangozov at inrialpes.fr] wrote:
> Ahem, which one?
> Veuve Cliquot, Dom Perignon, Moet & Chandon or Taittinger Millésimé? :-)

Given the involvement of Richard Stallman, and its similarity to a
peace accord during WWII, I'd vote for Pol Roger Sir Winston Churchill 
cuvee :-)

Chris

-- 
| Christopher Petrilli
| petrilli at amber.org



From Vladimir.Marangozov at inrialpes.fr  Tue Sep  5 17:38:47 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Tue, 5 Sep 2000 17:38:47 +0200 (CEST)
Subject: [Python-Dev] malloc restructuring in 1.6
In-Reply-To: <14773.2598.24665.940797@bitdiddle.concentric.net> from "Jeremy Hylton" at Sep 05, 2000 10:58:46 AM
Message-ID: <200009051538.RAA27615@python.inrialpes.fr>

Jeremy Hylton wrote:
> 
> I'm editing the NEWS file for 2.0 and noticed that Vladimir's malloc
> changes are listed as new for 2.0.  I think they actually went into
> 1.6, but I'm not certain.  Can anyone confirm?

Yes, they're in 1.6.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From Vladimir.Marangozov at inrialpes.fr  Tue Sep  5 18:02:51 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Tue, 5 Sep 2000 18:02:51 +0200 (CEST)
Subject: [Python-Dev] License status and 1.6 and 2.0 releases
In-Reply-To: <20000905111904.A14540@trump.amber.org> from "Christopher Petrilli" at Sep 05, 2000 11:19:05 AM
Message-ID: <200009051602.SAA27759@python.inrialpes.fr>

Christopher Petrilli wrote:
> 
> Vladimir Marangozov [Vladimir.Marangozov at inrialpes.fr] wrote:
> > Ahem, which one?
> > Veuve Cliquot, Dom Perignon, Moet & Chandon or Taittinger Millésimé? :-)
> 
> Given the involvement of Richard Stallman, and its similarity to a
> peace accord during WWII, I'd vote for Pol Roger Sir Winston Churchill 
> cuvee :-)
> 

Ah. That would have been my pleasure, but I am out of stock for this one.
Sorry. However, I'll make sure to order a bottle and keep it ready in my
cellar for the ratification of the final license. In the meantime, the
above is the best I can offer -- the rest is cheap stuff to be consumed
only on bad news <wink>.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From jeremy at beopen.com  Tue Sep  5 20:43:04 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Tue, 5 Sep 2000 14:43:04 -0400 (EDT)
Subject: [Python-Dev] checkin messages that reference SF bugs or patches
Message-ID: <14773.16056.958855.185889@bitdiddle.concentric.net>

If you commit a change that closes an SF bug or patch, please write a
checkin message that describes the change independently of the
information stored in SF.  You should also reference the bug or patch
id, but the id alone is not sufficient.

I am working on the NEWS file for Python 2.0 and have found a few
checkin messages that just said "SF patch #010101."  It's tedious to
go find the closed patch entry and read all the discussion.  Let's
assume the person reading the CVS log does not have access to the SF
databases. 

Jeremy



From akuchlin at mems-exchange.org  Tue Sep  5 20:57:05 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Tue, 5 Sep 2000 14:57:05 -0400
Subject: [Python-Dev] Updated version of asyncore.py?
Message-ID: <20000905145705.A2512@kronos.cnri.reston.va.us>

asyncore.py in the CVS tree is revision 2.40 1999/05/27, while Sam
Rushing's most recent tarball contains revision 2.49 2000/05/04.  The
major change is that lots of methods in 2.49 have an extra optional
argument, map=None.  (I noticed the discrepancy while packaging ZEO,
which assumes the most recent version.)
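For context, the map= argument lets each dispatcher register itself in an explicit dict instead of asyncore's single module-level socket map, so two independent pollers no longer share state. A minimal sketch of the idea (illustrative names only, not the real asyncore API):

```python
# Old behavior: every dispatcher lands in one shared, module-level map.
# The 2.49 change threads an optional map= through, letting callers keep
# a private fd -> dispatcher dict instead.
global_map = {}

class Dispatcher:
    def __init__(self, fd, map=None):
        self.fd = fd
        if map is None:
            map = global_map      # fall back to the shared map
        map[fd] = self            # register in whichever map was chosen

private_map = {}
a = Dispatcher(1)                     # goes into the shared map
b = Dispatcher(2, map=private_map)    # isolated in its own map
```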

asynchat.py is also slightly out of date: 
< #     Id: asynchat.py,v 2.23 1999/05/01 04:49:24 rushing Exp
---
> #     $Id: asynchat.py,v 2.25 1999/11/18 11:01:08 rushing Exp $

The CVS versions have additional docstrings and a few typo fixes in
comments.  Should the Python library versions be updated?  (+1 from
me, obviously.)

--amk



From martin at loewis.home.cs.tu-berlin.de  Tue Sep  5 22:46:16 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Tue, 5 Sep 2000 22:46:16 +0200
Subject: [Python-Dev] Re: urllib.URLopener does not work with proxies (Bug 110692)
Message-ID: <200009052046.WAA03605@loewis.home.cs.tu-berlin.de>

Hi Andrew,

This is likely incorrect usage of the module. The proxy argument must
be a dictionary mapping strings of protocol names to  strings of URLs.

Please confirm whether this was indeed the problem; if not, please add
more detail as to how exactly you had used the module.
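For reference, a sketch of the expected shape of that argument (the proxy host below is hypothetical):

```python
# The proxies argument must be a dictionary mapping protocol-name strings
# to proxy-URL strings -- not a single URL string.
proxies = {
    'http': 'http://proxy.example.com:8080',   # hypothetical proxy host
    'ftp':  'http://proxy.example.com:8080',
}
# opener = urllib.URLopener(proxies)  # 1.6/2.0-era API; not exercised here
for scheme, url in proxies.items():
    assert isinstance(scheme, str) and isinstance(url, str)
```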

See

http://sourceforge.net/bugs/?func=detailbug&bug_id=110692&group_id=5470

for the status of this report; it would be appreciated if you recorded
any comments on that page.

Regards,
Martin




From guido at cj20424-a.reston1.va.home.com  Tue Sep  5 20:49:38 2000
From: guido at cj20424-a.reston1.va.home.com (Guido van Rossum)
Date: Tue, 05 Sep 2000 13:49:38 -0500
Subject: [Python-Dev] Python 1.6, the final release, is out!
Message-ID: <200009051849.NAA01719@cj20424-a.reston1.va.home.com>

------- Blind-Carbon-Copy

To: python-list at python.org (Python mailing list),
    python-announce-list at python.org
Subject: Python 1.6, the final release, is out!
From: Guido van Rossum <guido at beopen.com>
Date: Tue, 05 Sep 2000 13:49:38 -0500
Sender: guido at cj20424-a.reston1.va.home.com

OK folks, believe it or not, Python 1.6 is released.

Please go here to pick it up:

    http://www.python.org/1.6/

There's a tarball and a Windows installer, and a long list of new
features.

CNRI has placed an open source license on this version.  CNRI believes
that this version is compatible with the GPL, but there is a
technicality concerning the choice of law provision, which Richard
Stallman believes may make it incompatible.  CNRI is still trying to
work this out with Stallman.  Future versions of Python will be
released by BeOpen PythonLabs under a GPL-compatible license if at all
possible.

There's Only One Way To Do It.

- --Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)

------- End of Blind-Carbon-Copy



From martin at loewis.home.cs.tu-berlin.de  Wed Sep  6 00:03:16 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Wed, 6 Sep 2000 00:03:16 +0200
Subject: [Python-Dev] undefined symbol in custom interpreter (Bug 110701)
Message-ID: <200009052203.AAA04445@loewis.home.cs.tu-berlin.de>

Your PR is now being tracked at

http://sourceforge.net/bugs/?func=detailbug&bug_id=110701&group_id=5470

This is not a bug in Python. When linking a custom interpreter, you
need to make sure all symbols are exported to modules. On FreeBSD, you
do this by adding -Wl,--export-dynamic to the linker line.

Can someone please close this report?

Martin



From jeremy at beopen.com  Wed Sep  6 00:20:07 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Tue, 5 Sep 2000 18:20:07 -0400 (EDT)
Subject: [Python-Dev] undefined symbol in custom interpreter (Bug 110701)
In-Reply-To: <200009052203.AAA04445@loewis.home.cs.tu-berlin.de>
References: <200009052203.AAA04445@loewis.home.cs.tu-berlin.de>
Message-ID: <14773.29079.142749.496111@bitdiddle.concentric.net>

Closed it.  Thanks.

Jeremy



From skip at mojam.com  Wed Sep  6 00:38:02 2000
From: skip at mojam.com (Skip Montanaro)
Date: Tue, 5 Sep 2000 17:38:02 -0500 (CDT)
Subject: [Python-Dev] Updated version of asyncore.py?
In-Reply-To: <20000905145705.A2512@kronos.cnri.reston.va.us>
References: <20000905145705.A2512@kronos.cnri.reston.va.us>
Message-ID: <14773.30154.924465.632830@beluga.mojam.com>

    Andrew> The CVS versions have additional docstrings and a few typo fixes
    Andrew> in comments.  Should the Python library versions be updated?
    Andrew> (+1 from me, obviously.)

+1 from me as well.  I think asyncore.py and asynchat.py are important
enough to a number of packages that we ought to make the effort to keep the
Python-distributed versions up-to-date.  I suspect adding Sam as a developer
would make keeping it updated in CVS much easier than in the past.

Skip



From guido at beopen.com  Wed Sep  6 06:49:27 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 05 Sep 2000 23:49:27 -0500
Subject: [Python-Dev] Python 2.0b1 is released!
Message-ID: <200009060449.XAA02145@cj20424-a.reston1.va.home.com>

A unique event in all the history of Python: two releases on the same
day!  (At least in my timezone. :-)

Python 2.0b1 is released.  The BeOpen PythonLabs and our cast of
SourceForge volunteers have been working on this version since May.
Please go here to pick it up:

    http://www.pythonlabs.com/tech/python2.0/

There's a tarball and a Windows installer, online documentation (with
a new color scheme :-), RPMs, and a long list of new features.  OK, a
teaser:

  - Augmented assignment, e.g. x += 1
  - List comprehensions, e.g. [x**2 for x in range(10)]
  - Extended import statement, e.g. import Module as Name
  - Extended print statement, e.g. print >> file, "Hello"
  - Optional collection of cyclical garbage
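The first three of those can be tried straight from the prompt, and they still run unchanged today; the extended print form is 2.x-only syntax, shown below only as a comment:

```python
import math as m                      # extended import statement

x = 1
x += 1                                # augmented assignment

squares = [n**2 for n in range(10)]   # list comprehension

# print >> sys.stderr, "Hello"        # extended print (2.x-only syntax)
```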

There's one bit of sad news: according to Richard Stallman, this
version is no more compatible with the GPL than version 1.6 that was
released this morning by CNRI, because of a technicality concerning
the choice of law provision in the CNRI license.  Because 2.0b1 has to
be considered a derivative work of 1.6, this technicality in the CNRI
license applies to 2.0 too (and to any other derivative works of 1.6).
CNRI is still trying to work this out with Stallman, so I hope that we
will be able to release future versions of Python under a
GPL-compatible license.

There's Only One Way To Do It.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From cgw at fnal.gov  Wed Sep  6 16:31:11 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Wed, 6 Sep 2000 09:31:11 -0500 (CDT)
Subject: [Python-Dev] newimp.py
Message-ID: <14774.21807.691920.988409@buffalo.fnal.gov>

Installing the brand-new 2.0b1 I see this:

Compiling /usr/lib/python2.0/newimp.py ...
  File "/usr/lib/python2.0/newimp.py", line 137
    envDict[varNm] = val
                        ^
And attempting to import it gives me:

Python 2.0b1 (#14, Sep  6 2000, 09:24:44) 
[GCC 2.96 20000905 (experimental)] on linux2
Type "copyright", "credits" or "license" for more information.
>>> import newimp
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/usr/lib/python2.0/newimp.py", line 1567, in ?
    init()
  File "/usr/lib/python2.0/newimp.py", line 203, in init
    if (not aMod.__dict__.has_key(PKG_NM)) or full_reset:
AttributeError: 'None' object has no attribute '__dict__'

This code was last touched on 1995/07/12.  It looks defunct to me.
Should it be removed from the distribution or should I spend the time
to fix it?





From skip at mojam.com  Wed Sep  6 17:12:56 2000
From: skip at mojam.com (Skip Montanaro)
Date: Wed, 6 Sep 2000 10:12:56 -0500 (CDT)
Subject: [Python-Dev] newimp.py
In-Reply-To: <14774.21807.691920.988409@buffalo.fnal.gov>
References: <14774.21807.691920.988409@buffalo.fnal.gov>
Message-ID: <14774.24312.78161.249542@beluga.mojam.com>

    Charles> This code was last touched on 1995/07/12.  It looks defunct to
    Charles> me.  Should it be removed from the distribution or should I
    Charles> spend the time to fix it?

Charles,

Try deleting /usr/lib/python2.0/newimp.py, then do a re-install.  (Actually,
perhaps you should delete *.py in that directory and selectively delete
subdirectories as well.)  I don't see newimp.py anywhere in the 2.0b1 tree.

Skip



From cgw at fnal.gov  Wed Sep  6 19:56:44 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Wed, 6 Sep 2000 12:56:44 -0500 (CDT)
Subject: [Python-Dev] newimp.py
In-Reply-To: <14774.24312.78161.249542@beluga.mojam.com>
References: <14774.21807.691920.988409@buffalo.fnal.gov>
	<14774.24312.78161.249542@beluga.mojam.com>
Message-ID: <14774.34140.432485.450929@buffalo.fnal.gov>

Skip Montanaro writes:

 > Try deleting /usr/lib/python2.0/newimp.py, then do a re-install.  (Actually,
 > perhaps you should delete *.py in that directory and selectively delete
 > subdirectories as well.)  I don't see newimp.py anywhere in the 2.0b1 tree.

Something is really screwed up with CVS, or my understanding of it.
Look at this transcript:

buffalo:Lib$ pwd
/usr/local/src/Python-CVS/python/dist/src/Lib

buffalo:Lib$ rm newimp.py                                                      

buffalo:Lib$ cvs status newimp.py                                              
===================================================================
File: no file newimp.py         Status: Needs Checkout

   Working revision:    1.7
   Repository revision: 1.7     /cvsroot/python/python/dist/src/Lib/Attic/newimp.py,v
   Sticky Tag:          (none)
   Sticky Date:         (none)
   Sticky Options:      (none)

buffalo:Lib$ cvs update -dAP                                                   
cvs server: Updating .
U newimp.py
<rest of update output omitted>

buffalo:Lib$ ls -l newimp.py                                                   
-rwxr-xr-x   1 cgw      g023        54767 Sep  6 12:50 newimp.py

buffalo:Lib$ cvs status newimp.py 
===================================================================
File: newimp.py         Status: Up-to-date

   Working revision:    1.7
   Repository revision: 1.7     /cvsroot/python/python/dist/src/Lib/Attic/newimp.py,v
   Sticky Tag:          (none)
   Sticky Date:         (none)
   Sticky Options:      (none)

If I edit the CVS/Entries file and remove "newimp.py" from there, the
problem goes away.  But I work with many CVS repositories, and the
Python repository at SourceForge is the only one that forces me to
manually edit the Entries file.  You're really not supposed to need to
do that!

I'm running CVS version 1.10.6.  I think 1.10.6 is supposed to be a
"good" version to use.  What are other people using?  Does everybody
just go around editing CVS/Entries whenever files are removed from the
repository?  What am I doing wrong?  I'm starting to get a little
annoyed by the SourceForge CVS server.  Is it just me?







From nascheme at enme.ucalgary.ca  Wed Sep  6 20:06:29 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Wed, 6 Sep 2000 12:06:29 -0600
Subject: [Python-Dev] newimp.py
In-Reply-To: <14774.34140.432485.450929@buffalo.fnal.gov>; from Charles G Waldman on Wed, Sep 06, 2000 at 12:56:44PM -0500
References: <14774.21807.691920.988409@buffalo.fnal.gov> <14774.24312.78161.249542@beluga.mojam.com> <14774.34140.432485.450929@buffalo.fnal.gov>
Message-ID: <20000906120629.B1977@keymaster.enme.ucalgary.ca>

On Wed, Sep 06, 2000 at 12:56:44PM -0500, Charles G Waldman wrote:
> Something is really screwed up with CVS, or my understanding of it.

The latter I believe unless I completely misunderstand your transcript.
Look at "cvs remove".

  Neil



From cgw at fnal.gov  Wed Sep  6 20:19:50 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Wed, 6 Sep 2000 13:19:50 -0500 (CDT)
Subject: [Python-Dev] newimp.py
In-Reply-To: <20000906120629.B1977@keymaster.enme.ucalgary.ca>
References: <14774.21807.691920.988409@buffalo.fnal.gov>
	<14774.24312.78161.249542@beluga.mojam.com>
	<14774.34140.432485.450929@buffalo.fnal.gov>
	<20000906120629.B1977@keymaster.enme.ucalgary.ca>
Message-ID: <14774.35526.470896.324060@buffalo.fnal.gov>

Neil wrote:
 
 >Look at "cvs remove".

Sorry, I must have my "stupid" bit set today (didn't sleep enough last
night).  Do you mean that I'm supposed to cvs remove the file?  AFAIK,
when I do a "cvs update" that should remove all files that are no
longer pertinent.  Guido (or somebody else with CVS write access) does
the "cvs remove" and "cvs commit", and then when I do my next 
"cvs update" my local copy of the file should be removed.  At least
that's the way it works with all the other projects I track via CVS.

And of course if I try to "cvs remove newimp.py", I get: 

cvs [server aborted]: "remove" requires write access to the repository

as I would expect.

Or are you simply telling me that if I read the documentation on the
"cvs remove" command, the scales will fall from my eyes?  I've read
it, and it doesn't help :-(

Sorry for bugging everybody with my stupid CVS questions.  But I do
really think that something is screwy with the CVS repository.  And
I've never seen *any* documentation which suggests that you need to
manually edit the CVS/Entries file, which was Fred Drake's suggested
fix last time I reported such a problem with CVS.

Oh well, if this only affects me, then I guess the burden of proof is
on me.  Meanwhile I guess I just have to remember that I can't really
trust CVS to delete obsolete files.






From skip at mojam.com  Wed Sep  6 20:49:56 2000
From: skip at mojam.com (Skip Montanaro)
Date: Wed, 6 Sep 2000 13:49:56 -0500 (CDT)
Subject: [Python-Dev] newimp.py
In-Reply-To: <14774.35526.470896.324060@buffalo.fnal.gov>
References: <14774.21807.691920.988409@buffalo.fnal.gov>
	<14774.24312.78161.249542@beluga.mojam.com>
	<14774.34140.432485.450929@buffalo.fnal.gov>
	<20000906120629.B1977@keymaster.enme.ucalgary.ca>
	<14774.35526.470896.324060@buffalo.fnal.gov>
Message-ID: <14774.37332.534262.200618@beluga.mojam.com>

    Charles> Oh well, if this only affects me, then I guess the burden of
    Charles> proof is on me.  Meanwhile I guess I just have to remember that
    Charles> I can't really trust CVS to delete obsolete files.

Charles,

I'm not sure what to make of your problem.  I can't reproduce it.  On the
Linux systems from which I track the CVS repository, I run cvs 1.10.6,
1.10.7 and 1.10.8 and haven't seen the problem you describe.  I checked
six different Python trees on four different machines for evidence of
Lib/newimp.py.  One of the trees still references cvs.python.org and hasn't
been updated since September 4, 1999.  Even it doesn't have a Lib/newimp.py
file.  I believe the demise of Lib/newimp.py predates the creation of the
SourceForge CVS repository by quite a while.

You might try executing cvs checkout in a fresh directory and compare that
with your problematic tree.

Skip



From cgw at fnal.gov  Wed Sep  6 21:10:48 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Wed, 6 Sep 2000 14:10:48 -0500 (CDT)
Subject: [Python-Dev] newimp.py
In-Reply-To: <14774.37332.534262.200618@beluga.mojam.com>
References: <14774.21807.691920.988409@buffalo.fnal.gov>
	<14774.24312.78161.249542@beluga.mojam.com>
	<14774.34140.432485.450929@buffalo.fnal.gov>
	<20000906120629.B1977@keymaster.enme.ucalgary.ca>
	<14774.35526.470896.324060@buffalo.fnal.gov>
	<14774.37332.534262.200618@beluga.mojam.com>
Message-ID: <14774.38584.869242.974864@buffalo.fnal.gov>

Skip Montanaro writes:
 > 
 > I'm not sure what to make of your problem.  I can't reproduce it.  On the
 > Linux systems from which I track the CVS repository, I run cvs 1.10.6,
 > 1.10.7 and 1.10.8 and haven't had seen the problem you describe.

How about if you go to one of those CVS trees, cd Lib, and type
"cvs update newimp.py" ?

If I check out a new tree, "newimp.py" is indeed not there.  But if I
do "cvs update newimp.py" it appears.  I am sure that this is *not*
the correct behavior for CVS.  If a file has been cvs remove'd, then
updating it should not cause it to appear in my local repository.






From cgw at fnal.gov  Wed Sep  6 22:40:47 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Wed, 6 Sep 2000 15:40:47 -0500 (CDT)
Subject: [Python-Dev] newimp.py
In-Reply-To: <14774.43898.548664.200202@beluga.mojam.com>
References: <14774.21807.691920.988409@buffalo.fnal.gov>
	<14774.24312.78161.249542@beluga.mojam.com>
	<14774.34140.432485.450929@buffalo.fnal.gov>
	<20000906120629.B1977@keymaster.enme.ucalgary.ca>
	<14774.35526.470896.324060@buffalo.fnal.gov>
	<14774.37332.534262.200618@beluga.mojam.com>
	<14774.38584.869242.974864@buffalo.fnal.gov>
	<14774.43898.548664.200202@beluga.mojam.com>
Message-ID: <14774.43983.70263.934682@buffalo.fnal.gov>

Skip Montanaro writes:
 > 
 >     Charles> How about if you go to one of those CVS trees, cd Lib, and type
 >     Charles> "cvs update newimp.py" ?
 > 
 > I get 
 > 
 >     beluga:Lib% cd ~/src/python/dist/src/Lib/
 >     beluga:Lib% cvs update newinp.py
 >     cvs server: nothing known about newinp.py

That's because you typed "newinp", not "newimp".  Try it with an "M"
and see what happens.

    -C




From effbot at telia.com  Wed Sep  6 23:17:37 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Wed, 6 Sep 2000 23:17:37 +0200
Subject: [Python-Dev] newimp.py
References: <14774.21807.691920.988409@buffalo.fnal.gov><14774.24312.78161.249542@beluga.mojam.com><14774.34140.432485.450929@buffalo.fnal.gov><20000906120629.B1977@keymaster.enme.ucalgary.ca><14774.35526.470896.324060@buffalo.fnal.gov><14774.37332.534262.200618@beluga.mojam.com><14774.38584.869242.974864@buffalo.fnal.gov><14774.43898.548664.200202@beluga.mojam.com> <14774.43983.70263.934682@buffalo.fnal.gov>
Message-ID: <04bd01c01847$e9a197c0$766940d5@hagrid>

charles wrote:
>  >     Charles> How about if you go to one of those CVS trees, cd Lib, and type
>  >     Charles> "cvs update newimp.py" ?

why do you keep doing that? ;-)

> That's because you typed "newinp", not "newimp".  Try it with an "M"
> and see what happens.

the file has state "Exp".  iirc, it should be "dead" for CVS
to completely ignore it.

guess it was removed long before the CVS repository was
moved to source forge, and that something went wrong
somewhere in the process...

</F>




From cgw at fnal.gov  Wed Sep  6 23:08:09 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Wed, 6 Sep 2000 16:08:09 -0500 (CDT)
Subject: [Python-Dev] newimp.py
In-Reply-To: <14774.44642.258108.758548@beluga.mojam.com>
References: <14774.21807.691920.988409@buffalo.fnal.gov>
	<14774.24312.78161.249542@beluga.mojam.com>
	<14774.34140.432485.450929@buffalo.fnal.gov>
	<20000906120629.B1977@keymaster.enme.ucalgary.ca>
	<14774.35526.470896.324060@buffalo.fnal.gov>
	<14774.37332.534262.200618@beluga.mojam.com>
	<14774.38584.869242.974864@buffalo.fnal.gov>
	<14774.43898.548664.200202@beluga.mojam.com>
	<14774.43983.70263.934682@buffalo.fnal.gov>
	<14774.44642.258108.758548@beluga.mojam.com>
Message-ID: <14774.45625.177110.349575@buffalo.fnal.gov>

Skip Montanaro writes:

 > Ah, yes, I get something:
 > 
 >     beluga:Lib% cvs update newimp.py
 >     U newimp.py
 >     beluga:Lib% ls -l newimp.py 
 >     -rwxrwxr-x    1 skip     skip        54767 Jul 12  1995 newimp.py

 > Why newimp.py is still available, I have no idea.  Note the beginning of the
 > module's doc string:

It's clear that the file is quite obsolete.  It's been moved to the
Attic, and the most recent tag on it is r13beta1.

What's not clear is why "cvs update" still fetches it.

Something is way screwy with SourceForge's CVS server, I'm tellin' ya!

Maybe it's running on a Linux box and uses the pthreads library?  ;-)

I guess since the server is at SourceForge, it's not really under
immediate control of anybody at either python.org or
BeOpen/PythonLabs, so it doesn't seem very likely that this will get
looked into anytime soon.  Sigh....






From guido at beopen.com  Thu Sep  7 05:07:09 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 06 Sep 2000 22:07:09 -0500
Subject: [Python-Dev] newimp.py
In-Reply-To: Your message of "Wed, 06 Sep 2000 23:17:37 +0200."
             <04bd01c01847$e9a197c0$766940d5@hagrid> 
References: <14774.21807.691920.988409@buffalo.fnal.gov><14774.24312.78161.249542@beluga.mojam.com><14774.34140.432485.450929@buffalo.fnal.gov><20000906120629.B1977@keymaster.enme.ucalgary.ca><14774.35526.470896.324060@buffalo.fnal.gov><14774.37332.534262.200618@beluga.mojam.com><14774.38584.869242.974864@buffalo.fnal.gov><14774.43898.548664.200202@beluga.mojam.com> <14774.43983.70263.934682@buffalo.fnal.gov>  
            <04bd01c01847$e9a197c0$766940d5@hagrid> 
Message-ID: <200009070307.WAA07393@cj20424-a.reston1.va.home.com>

> the file has state "Exp".  iirc, it should be "dead" for CVS
> to completely ignore it.
> 
> guess it was removed long before the CVS repository was
> moved to source forge, and that something went wrong
> somewhere in the process...

Could've been an old version of CVS.

Anyway, I checked it out, rm'ed it, cvs-rm'ed it, and committed it --
that seems to have taken care of it.

I hope the file wasn't in any beta distribution.  Was it?

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From sjoerd at oratrix.nl  Thu Sep  7 12:40:28 2000
From: sjoerd at oratrix.nl (Sjoerd Mullender)
Date: Thu, 07 Sep 2000 12:40:28 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules cPickle.c,2.50,2.51
In-Reply-To: Your message of Wed, 06 Sep 2000 17:11:43 -0700.
             <200009070011.RAA09907@slayer.i.sourceforge.net> 
References: <200009070011.RAA09907@slayer.i.sourceforge.net> 
Message-ID: <20000907104029.2B35031047C@bireme.oratrix.nl>

This doesn't work.  Neither m nor d is initialized at this point.

On Wed, Sep 6 2000 Guido van Rossum wrote:

> Update of /cvsroot/python/python/dist/src/Modules
> In directory slayer.i.sourceforge.net:/tmp/cvs-serv9746
> 
> Modified Files:
> 	cPickle.c 
> Log Message:
> Simple fix from Jim Fulton to avoid returning a half-initialized
> module when e.g. copy_reg.py doesn't exist.  This caused a core dump.
> 
> This closes SF bug 112944.
> 
> 
> Index: cPickle.c
> ===================================================================
> RCS file: /cvsroot/python/python/dist/src/Modules/cPickle.c,v
> retrieving revision 2.50
> retrieving revision 2.51
> diff -C2 -r2.50 -r2.51
> *** cPickle.c	2000/08/12 20:58:11	2.50
> --- cPickle.c	2000/09/07 00:11:40	2.51
> ***************
> *** 4522,4525 ****
> --- 4522,4527 ----
>       PyObject *compatible_formats;
>   
> +     if (init_stuff(m, d) < 0) return;
> + 
>       Picklertype.ob_type = &PyType_Type;
>       Unpicklertype.ob_type = &PyType_Type;
> ***************
> *** 4543,4547 ****
>       Py_XDECREF(format_version);
>       Py_XDECREF(compatible_formats);
> - 
> -     init_stuff(m, d);
>   }
> --- 4545,4547 ----
> 
> 

-- Sjoerd Mullender <sjoerd.mullender at oratrix.com>



From thomas.heller at ion-tof.com  Thu Sep  7 15:42:01 2000
From: thomas.heller at ion-tof.com (Thomas Heller)
Date: Thu, 7 Sep 2000 15:42:01 +0200
Subject: [Python-Dev] SF checkin policies
Message-ID: <02a401c018d1$669fbcf0$4500a8c0@thomasnb>

What are the checkin policies to the sourceforge
CVS repository?

Now that I have checkin rights (for the distutils),
I'm about to checkin new versions of the bdist_wininst
command. This is still under active development.

Should CVS only contain complete, working versions?
Or are intermediate, nonworking versions allowed?
Will a warning be given here on python-dev just before
a new (beta) distribution is created?

Thomas Heller






From fredrik at pythonware.com  Thu Sep  7 16:04:13 2000
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Thu, 7 Sep 2000 16:04:13 +0200
Subject: [Python-Dev] SF checkin policies
References: <02a401c018d1$669fbcf0$4500a8c0@thomasnb>
Message-ID: <025501c018d4$81301800$0900a8c0@SPIFF>

> What are the checkin policies to the sourceforge
> CVS repository?

http://python.sourceforge.net/peps/pep-0200.html

    Use good sense when committing changes.  You should know what we
    mean by good sense or we wouldn't have given you commit privileges
    <0.5 wink>.

    /.../

    Any significant new feature must be described in a PEP and
    approved before it is checked in.

    /.../

    Any significant code addition, such as a new module or large
    patch, must include test cases for the regression test and
    documentation.  A patch should not be checked in until the tests
    and documentation are ready.

    /.../

    It is not acceptable for any checked in code to cause the
    regression test to fail.  If a checkin causes a failure, it must
    be fixed within 24 hours or it will be backed out.

</F>




From guido at beopen.com  Thu Sep  7 17:50:25 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 07 Sep 2000 10:50:25 -0500
Subject: [Python-Dev] SF checkin policies
In-Reply-To: Your message of "Thu, 07 Sep 2000 15:42:01 +0200."
             <02a401c018d1$669fbcf0$4500a8c0@thomasnb> 
References: <02a401c018d1$669fbcf0$4500a8c0@thomasnb> 
Message-ID: <200009071550.KAA09309@cj20424-a.reston1.va.home.com>

> What are the checkin policies to the sourceforge
> CVS repository?
> 
> Now that I have checkin rights (for the distutils),
> I'm about to checkin new versions of the bdist_wininst
> command. This is still under active development.
> 
> Should CVS only contain complete, working versions?
> Or are intermediate, nonworking versions allowed?
> Will a warning be given here on python-dev just before
> a new (beta) distribution is created?

Please check in only working, tested code!  There are lots of people
(also outside the developers group) who do daily checkouts.  If they
get broken code, they'll scream hell!

We publicize and discuss the release schedule pretty intensely here so
you should have plenty of warning.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From Vladimir.Marangozov at inrialpes.fr  Thu Sep  7 17:59:40 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Thu, 7 Sep 2000 17:59:40 +0200 (CEST)
Subject: [Python-Dev] newimp.py
In-Reply-To: <200009070307.WAA07393@cj20424-a.reston1.va.home.com> from "Guido van Rossum" at Sep 06, 2000 10:07:09 PM
Message-ID: <200009071559.RAA06832@python.inrialpes.fr>

Guido van Rossum wrote:
> 
> Anyway, I checked it out, rm'ed it, cvs-rm'ed it, and committed it --
> that seems to have taken care of it.
> 
> I hope the file wasn't in any beta distribution.  Was it?

No. There's a .cvsignore file in the root directory of the latest
tarball, though. Not a big deal.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From Vladimir.Marangozov at inrialpes.fr  Thu Sep  7 18:46:11 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Thu, 7 Sep 2000 18:46:11 +0200 (CEST)
Subject: [Python-Dev] python -U fails
Message-ID: <200009071646.SAA07004@python.inrialpes.fr>

Seen on c.l.py (import site fails due to eval on an unicode string):

~/python/Python-2.0b1>python -U
'import site' failed; use -v for traceback
Python 2.0b1 (#2, Sep  7 2000, 12:59:53) 
[GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
Type "copyright", "credits" or "license" for more information.
>>> eval (u"1+2")
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: eval() argument 1 must be string or code object
>>> 

The offending eval is in os.py

Traceback (most recent call last):
  File "./Lib/site.py", line 60, in ?
    import sys, os
  File "./Lib/os.py", line 331, in ?
    if _exists("fork") and not _exists("spawnv") and _exists("execv"):
  File "./Lib/os.py", line 325, in _exists
    eval(name)

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From akuchlin at mems-exchange.org  Thu Sep  7 22:01:44 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Thu, 07 Sep 2000 16:01:44 -0400
Subject: [Python-Dev] hasattr() and Unicode strings
Message-ID: <E13X7rk-0005N9-00@kronos.cnri.reston.va.us>

hasattr(), getattr(), and doubtless other built-in functions
don't accept Unicode strings at all:

>>> import sys
>>> hasattr(sys, u'abc')
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: hasattr, argument 2: expected string, unicode found

Is this a bug or a feature?  I'd say bug; the Unicode should be
coerced using the default ASCII encoding, and an exception raised if
that isn't possible.
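The proposed coercion is easy to sketch (in modern spelling; `unicode` was a separate type in 1.6/2.0, so this approximates the idea rather than the actual patch):

```python
import sys

def hasattr_coerced(obj, name):
    # Coerce the name through the default (ASCII) encoding first; a name
    # outside ASCII raises UnicodeEncodeError instead of a TypeError.
    name.encode('ascii')
    return hasattr(obj, name)

assert hasattr_coerced(sys, u'path')
try:
    hasattr_coerced(sys, u'caf\u00e9')      # non-ASCII name
except UnicodeEncodeError:
    pass                                    # the proposed failure mode
```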

--amk



From fdrake at beopen.com  Thu Sep  7 22:02:52 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 7 Sep 2000 16:02:52 -0400 (EDT)
Subject: [Python-Dev] hasattr() and Unicode strings
In-Reply-To: <E13X7rk-0005N9-00@kronos.cnri.reston.va.us>
References: <E13X7rk-0005N9-00@kronos.cnri.reston.va.us>
Message-ID: <14775.62572.442732.589738@cj42289-a.reston1.va.home.com>

Andrew Kuchling writes:
 > Is this a bug or a feature?  I'd say bug; the Unicode should be
 > coerced using the default ASCII encoding, and an exception raised if
 > that isn't possible.

  I agree.
  Marc-Andre, what do you think?


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From martin at loewis.home.cs.tu-berlin.de  Thu Sep  7 22:08:45 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Thu, 7 Sep 2000 22:08:45 +0200
Subject: [Python-Dev] xml missing in Windows installer?
Message-ID: <200009072008.WAA00862@loewis.home.cs.tu-berlin.de>

Using the 2.0b1 Windows installer from BeOpen, I could not find
Lib/xml afterwards, whereas the .tgz does contain the xml package. Was
this intentional? Did I miss something?

Regards,
Martin




From effbot at telia.com  Thu Sep  7 22:25:02 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Thu, 7 Sep 2000 22:25:02 +0200
Subject: [Python-Dev] xml missing in Windows installer?
References: <200009072008.WAA00862@loewis.home.cs.tu-berlin.de>
Message-ID: <004c01c01909$b832a220$766940d5@hagrid>

martin wrote:

> Using the 2.0b1 Windows installer from BeOpen, I could not find
> Lib/xml afterwards, whereas the .tgz does contain the xml package. Was
> this intentional? Did I miss something?

Date: Thu, 7 Sep 2000 01:34:04 -0700
From: Tim Peters <tim_one at users.sourceforge.net>
To: python-checkins at python.org
Subject: [Python-checkins] CVS: python/dist/src/PCbuild python20.wse,1.15,1.16

Update of /cvsroot/python/python/dist/src/PCbuild
In directory slayer.i.sourceforge.net:/tmp/cvs-serv31884

Modified Files:
 python20.wse 
Log Message:
Windows installer, reflecting changes that went into a replacement 2.0b1
.exe that will show up on PythonLabs.com later today:
    Include the Lib\xml\ package (directory + subdirectories).
    Include the Lib\lib-old\ directory.
    Include the Lib\test\*.xml test cases (well, just one now).
    Remove the redundant install of Lib\*.py (looks like a stray duplicate
        line that's been there a long time).  Because of this, the new
        installer is a little smaller despite having more stuff in it.

...

</F>




From guido at beopen.com  Thu Sep  7 23:32:16 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 07 Sep 2000 16:32:16 -0500
Subject: [Python-Dev] hasattr() and Unicode strings
In-Reply-To: Your message of "Thu, 07 Sep 2000 16:01:44 -0400."
             <E13X7rk-0005N9-00@kronos.cnri.reston.va.us> 
References: <E13X7rk-0005N9-00@kronos.cnri.reston.va.us> 
Message-ID: <200009072132.QAA10047@cj20424-a.reston1.va.home.com>

> hasattr(), getattr(), and doubtless other built-in functions
> don't accept Unicode strings at all:
> 
> >>> import sys
> >>> hasattr(sys, u'abc')
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> TypeError: hasattr, argument 2: expected string, unicode found
> 
> Is this a bug or a feature?  I'd say bug; the Unicode should be
> coerced using the default ASCII encoding, and an exception raised if
> that isn't possible.

Agreed.

There are probably a bunch of things that need to be changed before
this works though; getattr() and friends require a string, then call
PyObject_GetAttr() which also checks for a string unless the object
supports tp_getattro -- but that's only true for classes and
instances.

Also, should we convert the string to 8-bit, or should we allow
Unicode attribute names?

It seems there's no easy fix -- better address this after 2.0 is
released.
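[Archive note: a minimal sketch of the coercion Andrew proposes above --
push the name through the default ASCII encoding and raise if that fails.
The wrapper name `safe_hasattr` is illustrative only; it is not the
eventual C-level fix.]

```python
import sys

def safe_hasattr(obj, name):
    # Coerce the attribute name via ASCII, as proposed; a name that
    # cannot be ASCII-encoded raises TypeError instead of silently
    # failing the lookup.
    try:
        name = name.encode('ascii').decode('ascii')
    except UnicodeEncodeError:
        raise TypeError("attribute name is not ASCII-encodable")
    return hasattr(obj, name)
```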

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From martin at loewis.home.cs.tu-berlin.de  Thu Sep  7 22:26:28 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Thu, 7 Sep 2000 22:26:28 +0200
Subject: [Python-Dev] Naming of config.h
Message-ID: <200009072026.WAA01094@loewis.home.cs.tu-berlin.de>

The fact that Python installs its config.h as
<prefix>/python2.0/config.h is annoying if one tries to combine Python
with some other autoconfiscated package.

If you configure that other package, it detects that it needs to add
-I/usr/local/include/python2.0; it also provides its own
config.h. When compiling files, the line

#include "config.h"

could then mean either one or the other. That can cause quite some
confusion: if the one of the package is used, LONG_LONG might not
exist, even though it should on that port.

This issue can be relaxed by renaming the "config.h" to
"pyconfig.h". That still might result in duplicate defines, but likely
SIZE_FLOAT (for example) has the same value in all definitions.

Regards,
Martin




From gstein at lyra.org  Thu Sep  7 22:41:12 2000
From: gstein at lyra.org (Greg Stein)
Date: Thu, 7 Sep 2000 13:41:12 -0700
Subject: [Python-Dev] Naming of config.h
In-Reply-To: <200009072026.WAA01094@loewis.home.cs.tu-berlin.de>; from martin@loewis.home.cs.tu-berlin.de on Thu, Sep 07, 2000 at 10:26:28PM +0200
References: <200009072026.WAA01094@loewis.home.cs.tu-berlin.de>
Message-ID: <20000907134112.W3278@lyra.org>

On Thu, Sep 07, 2000 at 10:26:28PM +0200, Martin v. Loewis wrote:
>...
> This issue can be relaxed by renaming the "config.h" to
> "pyconfig.h". That still might result in duplicate defines, but likely
> SIZE_FLOAT (for example) has the same value in all definitions.

This is not a simple problem. APR (a subcomponent of Apache) is set up to
build as an independent library. It is also autoconf'd, but it goes through
a *TON* of work to avoid passing any autoconf symbols into the public space.

Renaming the config.h file would be an interesting start, but it won't solve
the conflicting symbols (or typedefs!) problem. And from a portability
standpoint, that is important: some compilers don't like redefinitions, even
if they are the same.

IOW, if you want to make this "correct", then plan on setting aside a good
chunk of time.

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/



From guido at beopen.com  Thu Sep  7 23:57:39 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 07 Sep 2000 16:57:39 -0500
Subject: [Python-Dev] newimp.py
In-Reply-To: Your message of "Thu, 07 Sep 2000 17:59:40 +0200."
             <200009071559.RAA06832@python.inrialpes.fr> 
References: <200009071559.RAA06832@python.inrialpes.fr> 
Message-ID: <200009072157.QAA10441@cj20424-a.reston1.va.home.com>

> No. There's a .cvsignore file in the root directory of the latest
> tarball, though. Not a big deal.

Typically we leave all the .cvsignore files in.  They don't hurt
anybody, and getting rid of them manually is just a pain.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From akuchlin at mems-exchange.org  Thu Sep  7 23:27:03 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Thu, 7 Sep 2000 17:27:03 -0400
Subject: [Python-Dev] hasattr() and Unicode strings
In-Reply-To: <200009072132.QAA10047@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Thu, Sep 07, 2000 at 04:32:16PM -0500
References: <E13X7rk-0005N9-00@kronos.cnri.reston.va.us> <200009072132.QAA10047@cj20424-a.reston1.va.home.com>
Message-ID: <20000907172703.A1095@kronos.cnri.reston.va.us>

On Thu, Sep 07, 2000 at 04:32:16PM -0500, Guido van Rossum wrote:
>It seems there's no easy fix -- better address this after 2.0 is
>released.

OK; I'll file a bug report on SourceForge so this doesn't get forgotten.

--amk



From fdrake at beopen.com  Thu Sep  7 23:26:18 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 7 Sep 2000 17:26:18 -0400 (EDT)
Subject: [Python-Dev] New PDF documentation & Windows installer
Message-ID: <14776.2042.985615.611778@cj42289-a.reston1.va.home.com>

  As many people noticed, there was a problem with the PDF files
generated for the recent Python 2.0b1 release.  I've found & corrected
the problem, and uploaded new packages to the Web site.  Please get
new PDF files from:

	http://www.pythonlabs.com/tech/python2.0/download.html

  The new files show a date of September 7, 2000, rather than
September 5, 2000.
  An updated Windows installer is available which actually installs
the XML package.
  I'm sorry for any inconvenience these problems have caused.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From effbot at telia.com  Thu Sep  7 23:43:28 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Thu, 7 Sep 2000 23:43:28 +0200
Subject: [Python-Dev] update: tkinter problems on win95
Message-ID: <004101c01914$ae501ca0$766940d5@hagrid>

just fyi, I've now reduced the problem to two small C programs:
one program initializes Tcl and Tk in the same way as Tkinter --
and the program hangs in the same way as Tkinter (most likely
inside some finalization code that's called from DllMain).

the other does things in the same way as wish, and it never
hangs...

:::

still haven't figured out exactly what's different, but it's clearly
a problem with _tkinter's initialization code, and nothing else.  I'll
post a patch as soon as I have one...

</F>




From barry at scottb.demon.co.uk  Fri Sep  8 01:02:32 2000
From: barry at scottb.demon.co.uk (Barry Scott)
Date: Fri, 8 Sep 2000 00:02:32 +0100
Subject: [Python-Dev] xml missing in Windows installer?
In-Reply-To: <004c01c01909$b832a220$766940d5@hagrid>
Message-ID: <000901c0191f$b48d65e0$060210ac@private>

Please don't release new kits with identical names/versions as old kits.

How do you expect anyone to tell if they have the fix or not?

Finding and fixing bugs show you care about quality.
Stealth releases negate the benefit.

	Barry


> -----Original Message-----
> From: python-dev-admin at python.org [mailto:python-dev-admin at python.org]On
> Behalf Of Fredrik Lundh
> Sent: 07 September 2000 21:25
> To: Martin v. Loewis
> Cc: python-dev at python.org
> Subject: Re: [Python-Dev] xml missing in Windows installer?
> 
> 
> martin wrote:
> 
> > Using the 2.0b1 Windows installer from BeOpen, I could not find
> > Lib/xml afterwards, whereas the .tgz does contain the xml package. Was
> > this intentional? Did I miss something?
> 
> Date: Thu, 7 Sep 2000 01:34:04 -0700
> From: Tim Peters <tim_one at users.sourceforge.net>
> To: python-checkins at python.org
> Subject: [Python-checkins] CVS: python/dist/src/PCbuild python20.wse,1.15,1.16
> 
> Update of /cvsroot/python/python/dist/src/PCbuild
> In directory slayer.i.sourceforge.net:/tmp/cvs-serv31884
> 
> Modified Files:
>  python20.wse 
> Log Message:
> Windows installer, reflecting changes that went into a replacement 2.0b1
> .exe that will show up on PythonLabs.com later today:
>     Include the Lib\xml\ package (directory + subdirectories).
>     Include the Lib\lib-old\ directory.
>     Include the Lib\test\*.xml test cases (well, just one now).
>     Remove the redundant install of Lib\*.py (looks like a stray duplicate
>         line that's been there a long time).  Because of this, the new
>         installer is a little smaller despite having more stuff in it.
> 
> ...
> 
> </F>
> 
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://www.python.org/mailman/listinfo/python-dev
> 



From gward at mems-exchange.org  Fri Sep  8 01:16:56 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Thu, 7 Sep 2000 19:16:56 -0400
Subject: [Python-Dev] Noisy test_gc
Message-ID: <20000907191655.A9664@ludwig.cnri.reston.va.us>

Just built 2.0b1, and noticed that the GC test script is rather noisy:

  ...
  test_gc
  gc: collectable <list 0x818cf54>
  gc: collectable <dictionary 0x822f8b4>
  gc: collectable <list 0x818cf54>
  gc: collectable <tuple 0x822f484>
  gc: collectable <class 0x822f8b4>
  gc: collectable <dictionary 0x822f8e4>
  gc: collectable <A instance at 0x818cf54>
  gc: collectable <dictionary 0x822fb6c>
  gc: collectable <A instance at 0x818cf54>
  gc: collectable <dictionary 0x822fb9c>
  gc: collectable <instance method 0x81432bc>
  gc: collectable <B instance at 0x822f0d4>
  gc: collectable <dictionary 0x822fc9c>
  gc: uncollectable <dictionary 0x822fc34>
  gc: uncollectable <A instance at 0x818cf54>
  gc: collectable <dictionary 0x822fbcc>
  gc: collectable <function 0x8230fb4>
  test_gdbm
  ...

which is the same as it was the last time I built from CVS, but I would
have thought this should go away for a real release...

        Greg



From guido at beopen.com  Fri Sep  8 03:07:58 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 07 Sep 2000 20:07:58 -0500
Subject: [Python-Dev] GPL license issues hit Linux Today
Message-ID: <200009080107.UAA11841@cj20424-a.reston1.va.home.com>

http://linuxtoday.com/news_story.php3?ltsn=2000-09-07-001-21-OS-CY-DB

Plus my response

http://linuxtoday.com/news_story.php3?ltsn=2000-09-07-011-21-OS-CY-SW

I'll be off until Monday, relaxing at the beach!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From Vladimir.Marangozov at inrialpes.fr  Fri Sep  8 02:14:07 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Fri, 8 Sep 2000 02:14:07 +0200 (CEST)
Subject: [Python-Dev] Noisy test_gc
In-Reply-To: <20000907191655.A9664@ludwig.cnri.reston.va.us> from "Greg Ward" at Sep 07, 2000 07:16:56 PM
Message-ID: <200009080014.CAA07599@python.inrialpes.fr>

Greg Ward wrote:
> 
> Just built 2.0b1, and noticed that the GC test script is rather noisy:

The GC patch at SF makes it silent. It will be fixed for the final release.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From gward at python.net  Fri Sep  8 04:40:07 2000
From: gward at python.net (Greg Ward)
Date: Thu, 7 Sep 2000 22:40:07 -0400
Subject: [Python-Dev] Finding landmark when prefix != exec-prefix
Message-ID: <20000907224007.A959@beelzebub>

Hey all --

this is a bug I noticed in 1.5.2 ages ago, and never investigated
further.  I've just figured it out a little bit more; right now I can
only verify it in 1.5, as I don't have the right sort of 1.6 or 2.0
installation at home.  So if this has been fixed, I'll just shut up.

Bottom line: if you have an installation where prefix != exec-prefix,
and there is another Python installation on the system, then Python
screws up finding the landmark file (string.py in 1.5.2) and computes
the wrong prefix and exec-prefix.

Here's the scenario: I have a Red Hat 6.2 installation with the
"official" Red Hat python in /usr/bin/python.  I have a local build
installed with prefix=/usr/local/python and
exec-prefix=/usr/local/python.i86-linux; /usr/local/bin/python is a
symlink to ../python.i86-linux/bin/python.  (This dates to my days of
trying to understand what gets installed where.  Now, of course, I could
tell you what Python installs where in my sleep with one hand tied
behind my back... ;-)

Witness:
  $ /usr/bin/python -c "import sys ; print sys.prefix"
  /usr
  $ /usr/local/bin/python -c "import sys ; print sys.prefix"
  /usr

...even though /usr/local/bin/python's library is really in
/usr/local/python/lib/python1.5 and
/usr/local/python.i86-linux/lib/python1.5.

If I erase Red Hat's Python, then /usr/local/bin/python figures out its
prefix correctly.

Using "strace" sheds a little more light on things; here's what I get
after massaging the "strace" output a bit (grep for "string.py"; all
that shows up are 'stat()' calls, where only the last succeeds; I've
stripped out everything but the filename):

  /usr/local/bin/../python.i86-linux/bin/lib/python1.5/string.py
  /usr/local/bin/../python.i86-linux/bin/lib/python1.5/string.pyc
  /usr/local/bin/../python.i86-linux/lib/python1.5/string.py
  /usr/local/bin/../python.i86-linux/lib/python1.5/string.pyc
  /usr/local/bin/../lib/python1.5/string.py
  /usr/local/bin/../lib/python1.5/string.pyc
  /usr/local/bin/lib/python1.5/string.py
  /usr/local/bin/lib/python1.5/string.pyc
  /usr/local/lib/python1.5/string.py
  /usr/local/lib/python1.5/string.pyc
  /usr/lib/python1.5/string.py                # success because of Red Hat's
                                              # Python installation

Well, of course.  Python doesn't know what its true prefix is until it
has found its landmark file, but it can't find its landmark until it
knows its true prefix.  Here's the "strace" output after erasing Red
Hat's Python RPM:

  $ strace /usr/local/bin/python -c 1 2>&1 | grep 'string\.py'
  /usr/local/bin/../python.i86-linux/bin/lib/python1.5/string.py
  /usr/local/bin/../python.i86-linux/bin/lib/python1.5/string.pyc
  /usr/local/bin/../python.i86-linux/lib/python1.5/string.py
  /usr/local/bin/../python.i86-linux/lib/python1.5/string.pyc
  /usr/local/bin/../lib/python1.5/string.py
  /usr/local/bin/../lib/python1.5/string.pyc
  /usr/local/bin/lib/python1.5/string.py
  /usr/local/bin/lib/python1.5/string.pyc
  /usr/local/lib/python1.5/string.py
  /usr/local/lib/python1.5/string.pyc
  /usr/lib/python1.5/string.py               # now fail since I removed 
  /usr/lib/python1.5/string.pyc              # Red Hat's RPM
  /usr/local/python/lib/python1.5/string.py

A-ha!  When the /usr installation is no longer there to fool it, Python
then looks in the right place.

So, has this bug been fixed in 1.6 or 2.0?  If not, where do I look?

        Greg

PS. what about hard-coding a prefix and exec-prefix in the binary, and
only searching for the landmark if the hard-coded values fail?  That
way, this complicated and expensive search is only done if the
installation has been relocated.
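[Archive note: the search Greg describes can be sketched in a few lines
of Python -- a simplification of getpath.c's logic; the helper name and
the fixed lib/python<version> layout are assumptions of the sketch.]

```python
import os

def find_prefix(argv0, version="1.5", landmark="string.py"):
    # Walk upward from the executable's directory, looking for
    # lib/python<version>/<landmark>; the first directory containing
    # the landmark becomes sys.prefix, otherwise None.
    directory = os.path.dirname(os.path.abspath(argv0))
    while True:
        candidate = os.path.join(directory, "lib",
                                 "python" + version, landmark)
        if os.path.isfile(candidate):
            return directory
        parent = os.path.dirname(directory)
        if parent == directory:      # reached the filesystem root
            return None
        directory = parent
```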

-- 
Greg Ward                                      gward at python.net
http://starship.python.net/~gward/



From jeremy at beopen.com  Fri Sep  8 05:13:09 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 7 Sep 2000 23:13:09 -0400 (EDT)
Subject: [Python-Dev] Finding landmark when prefix != exec-prefix
In-Reply-To: <20000907224007.A959@beelzebub>
References: <20000907224007.A959@beelzebub>
Message-ID: <14776.22853.316652.994320@bitdiddle.concentric.net>

>>>>> "GW" == Greg Ward <gward at python.net> writes:

  GW> PS. what about hard-coding a prefix and exec-prefix in the
  GW> binary, and only searching for the landmark if the hard-coded
  GW> values fail?  That way, this complicated and expensive search is
  GW> only done if the installation has been relocated.

I've tried not to understand much about the search process.  I know
that it is slow (relatively speaking) and that it can be avoided by
setting the PYTHONHOME environment variable.

Jeremy



From MarkH at ActiveState.com  Fri Sep  8 06:02:07 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Fri, 8 Sep 2000 15:02:07 +1100
Subject: [Python-Dev] win32all-133 for Python 1.6, and win32all-134 for Python 2.0
Message-ID: <ECEPKNMJLHAPFFJHDOJBIEGJDIAA.MarkH@ActiveState.com>

FYI - I'm updating the starship pages, and will make an announcement to the
newsgroup soon.

But in the meantime, some advance notice:

* All new win32all builds will be released from
http://www.activestate.com/Products/ActivePython/win32all.html.  This is
good for me - ActiveState actually have paid systems guys :-)
win32all-133.exe for 1.6b1 and 1.6 final can be found there.

* win32all-134.exe for the Python 2.x betas is not yet referenced at that
page, but is at
www.activestate.com/download/ActivePython/windows/win32all/win32all-134.exe

If you have ActivePython, you do _not_ need win32all.

Please let me know if you have any problems, or any other questions
regarding this...

Thanks,

Mark.


_______________________________________________
win32-reg-users maillist  -  win32-reg-users at pythonpros.com
http://mailman.pythonpros.com/mailman/listinfo/win32-reg-users




From tim_one at email.msn.com  Fri Sep  8 09:45:14 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 8 Sep 2000 03:45:14 -0400
Subject: [Python-Dev] xml missing in Windows installer?
In-Reply-To: <000901c0191f$b48d65e0$060210ac@private>
Message-ID: <LNBBLJKPBEHFEDALKOLCOENFHEAA.tim_one@email.msn.com>

[Barry Scott]
> Please don't release new kits with identical names/versions as old kits.

It *is* the 2.0b1 release; the only difference is that two of the 2.0b1 Lib
sub-directories that got left out by mistake got included.  This is
repairing an error in the release process, not in the code.

> How do you expect anyone to tell if they have the fix or not?

If they have Lib\xml, they've got the repaired release.  Else they've got
the flawed one.  They can also tell from Python's startup line:

C:\Python20>python
Python 2.0b1 (#4, Sep  7 2000, 02:40:55) [MSC 32 bit (Intel)] on win32
Type "copyright", "credits" or "license" for more information.
>>>

The "#4" and the timestamp say that's the repaired release.  The flawed
release has "#3" there and an earlier timestamp.  If someone is still
incompetent to tell the difference <wink>, they can look at the installer
file size.

> Finding and fixing bugs show you care about quality.
> Stealth releases negate the benefit.

'Twasn't meant to be a "stealth release":  that's *another* screwup!  The
webmaster  didn't get the explanation onto the download page yet, for
reasons beyond his control.  Fred Drake *did* manage to update the
installer, and that was the most important part.  The explanation will show
up ... beats me, ask CNRI <wink>.





From mal at lemburg.com  Fri Sep  8 13:47:08 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 08 Sep 2000 13:47:08 +0200
Subject: [Python-Dev] python -U fails
References: <200009071646.SAA07004@python.inrialpes.fr>
Message-ID: <39B8D1BC.9B46E005@lemburg.com>

Vladimir Marangozov wrote:
> 
> Seen on c.l.py (import site fails due to eval on a Unicode string):
> 
> ~/python/Python-2.0b1>python -U
> 'import site' failed; use -v for traceback
> Python 2.0b1 (#2, Sep  7 2000, 12:59:53)
> [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
> Type "copyright", "credits" or "license" for more information.
> >>> eval (u"1+2")
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> TypeError: eval() argument 1 must be string or code object
> >>>
> 
> The offending eval is in os.py
> 
> Traceback (most recent call last):
>   File "./Lib/site.py", line 60, in ?
>     import sys, os
>   File "./Lib/os.py", line 331, in ?
>     if _exists("fork") and not _exists("spawnv") and _exists("execv"):
>   File "./Lib/os.py", line 325, in _exists
>     eval(name)

Note that many things fail when Python is started with -U... that
switch was introduced to get an idea of which parts
of the standard library fail to work in a mixed string/Unicode environment.

In the above case, I guess the eval() could be replaced by some
other logic which does a try: except NameError: check.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Fri Sep  8 14:02:46 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 08 Sep 2000 14:02:46 +0200
Subject: [Python-Dev] hasattr() and Unicode strings
References: <E13X7rk-0005N9-00@kronos.cnri.reston.va.us> <14775.62572.442732.589738@cj42289-a.reston1.va.home.com>
Message-ID: <39B8D566.4011E433@lemburg.com>

"Fred L. Drake, Jr." wrote:
> 
> Andrew Kuchling writes:
>  > Is this a bug or a feature?  I'd say bug; the Unicode should be
>  > coerced using the default ASCII encoding, and an exception raised if
>  > that isn't possible.
> 
>   I agree.
>   Marc-Andre, what do you think?

Sounds ok to me.

The only question is where to apply the patch:
1. in hasattr()
2. in PyObject_GetAttr()

I'd opt for using the second solution (it should allow string
and Unicode objects as attribute name). hasattr() would then
have to be changed to use the "O" parser marker.

What do you think ?

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Fri Sep  8 14:09:03 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 08 Sep 2000 14:09:03 +0200
Subject: [Python-Dev] hasattr() and Unicode strings
References: <E13X7rk-0005N9-00@kronos.cnri.reston.va.us> <200009072132.QAA10047@cj20424-a.reston1.va.home.com>
Message-ID: <39B8D6DF.AA11746D@lemburg.com>

Guido van Rossum wrote:
> 
> > hasattr(), getattr(), and doubtless other built-in functions
> > don't accept Unicode strings at all:
> >
> > >>> import sys
> > >>> hasattr(sys, u'abc')
> > Traceback (most recent call last):
> >   File "<stdin>", line 1, in ?
> > TypeError: hasattr, argument 2: expected string, unicode found
> >
> > Is this a bug or a feature?  I'd say bug; the Unicode should be
> > coerced using the default ASCII encoding, and an exception raised if
> > that isn't possible.
> 
> Agreed.
> 
> There are probably a bunch of things that need to be changed before
> this works though; getattr() and friends require a string, then call
> PyObject_GetAttr() which also checks for a string unless the object
> supports tp_getattro -- but that's only true for classes and
> instances.
> 
> Also, should we convert the string to 8-bit, or should we allow
> Unicode attribute names?

Attribute names will have to be 8-bit strings (at least in 2.0).

The reason here is that attributes are normally Python identifiers
which are plain ASCII and stored as 8-bit strings in the namespace
dictionaries, i.e. there's no way to add Unicode attribute names
other than by assigning directly to __dict__.

Note that keyword lookups already automatically convert Unicode
lookup strings to 8-bit using the default encoding. The same should
happen here, IMHO.
 
> It seems there's no easy fix -- better address this after 2.0 is
> released.

Why wait for 2.1 ?

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From Vladimir.Marangozov at inrialpes.fr  Fri Sep  8 14:24:49 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Fri, 8 Sep 2000 14:24:49 +0200 (CEST)
Subject: [Python-Dev] Re: DEBUG_SAVEALL feature for gc not in 2.0b1?
In-Reply-To: <14769.15402.630192.4454@beluga.mojam.com> from "Skip Montanaro" at Sep 02, 2000 12:43:06 PM
Message-ID: <200009081224.OAA08999@python.inrialpes.fr>

Skip Montanaro wrote:
> 
>     Vlad> Skip Montanaro wrote:
>     >> 
>     >> If I read my (patched) version of gcmodule.c correctly, with the
>     >> gc.DEBUG_SAVEALL bit set, gc.garbage *does* acquire all garbage, not
>     >> just the stuff with __del__ methods.
> 
>     Vlad> Yes. And you don't know which objects are collectable and which
>     Vlad> ones are not by this collector. That is, SAVEALL transforms the
>     Vlad> collector in a cycle detector. 
> 
> Which is precisely what I want.

All right! Since I haven't seen any votes, here's a +1. I'm willing
to handle Neil's patch at SF and let it in after some minor cleanup
that we'll discuss on the patch manager.

Any objections or other opinions on this?
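[Archive note: a quick illustration of what the patch enables; with
DEBUG_SAVEALL set, every collectable object is appended to gc.garbage
instead of being freed, turning the collector into a cycle detector.
The class name below is just for the demo.]

```python
import gc

gc.set_debug(gc.DEBUG_SAVEALL)

class Node:
    pass

a, b = Node(), Node()
a.other, b.other = b, a          # create a reference cycle
del a, b
gc.collect()                     # cycle is detected but saved, not freed

# every collectable object now sits in gc.garbage
saved_nodes = [o for o in gc.garbage if isinstance(o, Node)]

gc.set_debug(0)
gc.garbage.clear()               # clean up after the demo
```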

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From gward at mems-exchange.org  Fri Sep  8 14:59:30 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Fri, 8 Sep 2000 08:59:30 -0400
Subject: [Python-Dev] Setup script for Tools/compiler (etc.)
Message-ID: <20000908085930.A15918@ludwig.cnri.reston.va.us>

Jeremy --

it seems to me that there ought to be a setup script in Tools/compiler;
it may not be part of the standard library, but at least it ought to
support the standard installation scheme.

So here it is:

  #!/usr/bin/env python

  from distutils.core import setup

  setup(name = "compiler",
        version = "?",
        author = "Jeremy Hylton",
        author_email = "jeremy at beopen.com",
        packages = ["compiler"])

Do you want to check it in or shall I?  ;-)

Also -- and this is the reason I cc'd python-dev -- there are probably
other useful hacks in Tools that should have setup scripts.  I'm
thinking most prominently of IDLE; as near as I can tell, the only way
to install IDLE is to manually copy Tools/idle/*.py to
<prefix>/lib/python{1.6,2.0}/site-packages/idle and then write a little
shell script to launch it for you, eg:

  #!/bin/sh
  # GPW 2000/07/10 ("strongly inspired" by Red Hat's IDLE script ;-)
  exec /depot/plat/packages/python-2.0b1/bin/python \
    /depot/plat/packages/python-2.0b1/lib/python2.0/site-packages/idle/idle.py $*

This is, of course, completely BOGUS!  Users should not have to write
shell scripts just to install and run IDLE in a sensible way.  I would
be happy to write a setup script that makes it easy to install
Tools/idle as a "third-party" module distribution, complete with a
launch script, if there's interest.  Oh hell, maybe I'll do it
anyways... just howl if you don't think I should check it in.

        Greg
-- 
Greg Ward - software developer                gward at mems-exchange.org
MEMS Exchange / CNRI                           voice: +1-703-262-5376
Reston, Virginia, USA                            fax: +1-703-262-5367



From Vladimir.Marangozov at inrialpes.fr  Fri Sep  8 15:47:08 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Fri, 8 Sep 2000 15:47:08 +0200 (CEST)
Subject: [Python-Dev] Challenge about print >> None
Message-ID: <200009081347.PAA13686@python.inrialpes.fr>

Seems like people are very surprised to see "print >> None" defaulting
to "print >> sys.stderr". I must confess that now that I'm looking at
it and after reading the PEP, this change lacks some argumentation.

In Python, this form surely looks & feels like the Unix cat /dev/null,
that is, since None doesn't have a 'write' method, the print statement
is expected to either raise an exception or be specialized for None to mean
"the print statement has no effect". The deliberate choice of sys.stderr
is not obvious.

I understand that Guido wanted to say "print >> None, args == print args"
and simplify the script logic, but using None in this case seems like a
bad spelling <wink>.

I have certainly carefully avoided any debates on the issue as I don't
see myself using this feature any time soon, but when I see on c.l.py
reactions of surprise at weakly argued/documented features and I
kind of feel the same way, I'd better ask for more arguments here myself.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From gward at mems-exchange.org  Fri Sep  8 16:14:26 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Fri, 8 Sep 2000 10:14:26 -0400
Subject: [Python-Dev] Distutil-ized IDLE
In-Reply-To: <20000908085930.A15918@ludwig.cnri.reston.va.us>; from gward@mems-exchange.org on Fri, Sep 08, 2000 at 08:59:30AM -0400
References: <20000908085930.A15918@ludwig.cnri.reston.va.us>
Message-ID: <20000908101426.A16014@ludwig.cnri.reston.va.us>

On 08 September 2000, I said:
> I would be happy to write a setup script that makes it easy to install
> Tools/idle as a "third-party" module distribution, complete with a
> launch script, if there's interest.  Oh hell, maybe I'll do it
> anyways... just howl if you don't think I should check it in.

OK, as threatened, I've written a setup script for IDLE.  (Specifically,
the version in Tools/idle in the Python 1.6 and 2.0 source
distributions.)  This installs IDLE into a package "idle", which means
that the imports in idle.py have to change.  Rather than change idle.py,
I wrote a new script just called "idle"; this would replace idle.py and
be installed in <prefix>/bin (on Unix -- I think scripts installed by
the Distutils go to <prefix>/Scripts on Windows, which was a largely
arbitrary choice).

Anyways, here's the setup script:

  #!/usr/bin/env python

  import os
  from distutils.core import setup
  from distutils.command.install_data import install_data

  class IDLE_install_data (install_data):
      def finalize_options (self):
          if self.install_dir is None:
              install_lib = self.get_finalized_command('install_lib')
              self.install_dir = os.path.join(install_lib.install_dir, "idle")

  setup(name = "IDLE",
        version = "0.6",
        author = "Guido van Rossum",
        author_email = "guido at python.org",
        cmdclass = {'install_data': IDLE_install_data},
        packages = ['idle'],
        package_dir = {'idle': ''},
        scripts = ['idle'],
        data_files = ['config.txt', 'config-unix.txt', 'config-win.txt'])

And the changes I suggest to make IDLE smoothly installable:
  * remove idle.py 
  * add this setup.py and idle (which is just idle.py with the imports
    changed)
  * add some instructions on how to install and run IDLE somewhere

I just checked the CVS repository for the IDLE fork, and don't see a
setup.py there either -- so presumably the forked IDLE could benefit
from this as well (hence the cc: idle-dev at python.org).

        Greg
-- 
Greg Ward - software developer                gward at mems-exchange.org
MEMS Exchange / CNRI                           voice: +1-703-262-5376
Reston, Virginia, USA                            fax: +1-703-262-5367



From mal at lemburg.com  Fri Sep  8 16:30:37 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 08 Sep 2000 16:30:37 +0200
Subject: [Python-Dev] Challenge about print >> None
References: <200009081347.PAA13686@python.inrialpes.fr>
Message-ID: <39B8F80D.FF9CBAA9@lemburg.com>

Vladimir Marangozov wrote:
> 
> Seems like people are very surprised to see "print >> None" defaulting
> to "print >> sys.stderr". I must confess that now that I'm looking at
> it and after reading the PEP, this change lacks some argumentation.

According to the PEP it defaults to sys.stdout with the effect of
working just like the plain old "print" statement.

> In Python, this form surely looks & feels like the Unix cat /dev/null,
> that is, since None doesn't have a 'write' method, the print statement
> is expected to either raise an exception or be specialized for None to mean
> "the print statement has no effect". The deliberate choice of sys.stderr
> is not obvious.
> 
> I understand that Guido wanted to say "print >> None, args == print args"
> and simplify the script logic, but using None in this case seems like a
> bad spelling <wink>.
> 
> I have certainly carefully avoided any debates on the issue as I don't
> see myself using this feature any time soon, but when I see on c.l.py
> reactions of surprise on weakly argumented/documented features and I
> kind of feel the same way, I'd better ask for more arguments here myself.

+1

I'd opt for raising an exception instead of magically using
sys.stdout just to avoid two lines of explicit defaulting to
sys.stdout (see the example in the PEP).

BTW, I noted that the PEP pages on SF are not up-to-date. The
PEP 214 doesn't have the comments which Guido added in support
of the proposal.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From fdrake at beopen.com  Fri Sep  8 16:49:59 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Fri, 8 Sep 2000 10:49:59 -0400 (EDT)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <39B8F80D.FF9CBAA9@lemburg.com>
References: <200009081347.PAA13686@python.inrialpes.fr>
	<39B8F80D.FF9CBAA9@lemburg.com>
Message-ID: <14776.64663.617863.830703@cj42289-a.reston1.va.home.com>

M.-A. Lemburg writes:
 > BTW, I noted that the PEP pages on SF are not up-to-date. The
 > PEP 214 doesn't have the comments which Guido added in support
 > of the proposal.

  I just pushed new copies up to SF using the CVS versions.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From bwarsaw at beopen.com  Fri Sep  8 17:00:46 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 8 Sep 2000 11:00:46 -0400 (EDT)
Subject: [Python-Dev] Finding landmark when prefix != exec-prefix
References: <20000907224007.A959@beelzebub>
Message-ID: <14776.65310.93934.482038@anthem.concentric.net>

Greg,

The place to look for the search algorithm is in Modules/getpath.c.
There's an extensive comment at the top of the file outlining the
algorithm.

In fact $PREFIX and $EXEC_PREFIX are used, but only as fallbacks.

-Barry



From skip at mojam.com  Fri Sep  8 17:00:38 2000
From: skip at mojam.com (Skip Montanaro)
Date: Fri, 8 Sep 2000 10:00:38 -0500 (CDT)
Subject: [Python-Dev] Re: [Bug #113811] Python 2.0 beta 1 -- urllib.urlopen() fails
In-Reply-To: <003601c0194e$916012f0$74eb0b18@C322162A>
References: <14776.4972.263490.780783@beluga.mojam.com>
	<003601c0194e$916012f0$74eb0b18@C322162A>
Message-ID: <14776.65302.599381.987636@beluga.mojam.com>

    Bob> The one I used was http://dreamcast.ign.com/review_lists/a.html,
    Bob> but probably any would do since it's pretty ordinary, and the error
    Bob> occurs before making any contact with the destination.

    Bob> By the way, I forgot to mention that I'm running under Windows 2000.

Bob,

Thanks for the input.  I asked for a URL because I thought it unlikely
something common would trigger a bug.  After all, urllib.urlopen is probably
one of the most frequently used Internet-related calls in Python.

I can't reproduce this on my Linux system:

    % ./python
    Python 2.0b1 (#6, Sep  7 2000, 21:03:08) 
    [GCC 2.95.3 19991030 (prerelease)] on linux2
    Type "copyright", "credits" or "license" for more information.
    >>> import urllib
    >>> f = urllib.urlopen("http://dreamcast.ign.com/review_lists/a.html")
    >>> data = f.read()
    >>> len(data)

Perhaps one of the folks on python-dev that run Windows of some flavor can
reproduce the problem.  Can you give me a simple session transcript like the
above that fails for you?  I will see about adding a test to the urllib
regression test.

-- 
Skip Montanaro (skip at mojam.com)
http://www.mojam.com/
http://www.musi-cal.com/



From bwarsaw at beopen.com  Fri Sep  8 17:27:24 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 8 Sep 2000 11:27:24 -0400 (EDT)
Subject: [Python-Dev] Challenge about print >> None
References: <200009081347.PAA13686@python.inrialpes.fr>
Message-ID: <14777.1372.641371.803126@anthem.concentric.net>

>>>>> "VM" == Vladimir Marangozov <Vladimir.Marangozov at inrialpes.fr> writes:

    VM> Seems like people are very surprised to see "print >> None"
    VM> defaulting to "print >> sys.stderr". I must confess that now
    VM> that I'm looking at it and after reading the PEP, this change
    VM> lacks some argumentation.

sys.stdout, not stderr.

I was pretty solidly -0 on this extension, but Guido wanted it (and
even supplied the necessary patch!).  It tastes too magical to me,
for exactly the same reasons you describe.

I hadn't thought of the None == /dev/null equivalence, but that's a
better idea, IMO.  In fact, perhaps the printing could be optimized
away when None is used (although you'd lose any side-effects there
might be).  This would actually make extended print more useful
because if you used

    print >> logfile

everywhere, you'd only need to start passing in logfile=None to
disable printing.  OTOH, it's not too hard to use

    class Devnull:
        def write(self, msg): pass

    logfile = Devnull()

We'll have to wait until after the weekend for Guido's pronouncement.

-Barry





From Vladimir.Marangozov at inrialpes.fr  Fri Sep  8 18:23:13 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Fri, 8 Sep 2000 18:23:13 +0200 (CEST)
Subject: [Python-Dev] 2.0 Optimization & speed
Message-ID: <200009081623.SAA14090@python.inrialpes.fr>

Continuing my impressions on the user's feedback to date: Donn Cave
& MAL are at least two voices I've heard about an overall slowdown
of the 2.0b1 release compared to 1.5.2. Frankly, I have no idea where
this slowdown comes from and I believe that we have only vague guesses
about the possible causes: unicode database, more opcodes in ceval, etc.

I wonder whether we are in a position to try improving Python's
performance with some `wise quickies' in a next beta. But this raises
a more fundamental question on what is our margin for manoeuvres at this
point. This in turn implies that we need some classification of the
proposed optimizations to date.

Perhaps it would be good to create a dedicated Web page for this, but
in the meantime, let's try to build a list/table of the ideas that have
been proposed so far. This would be useful anyway, and the list would be
filled as time goes.

Trying to push this initiative one step further, here's a very rough start
on the top of my head:

Category 1: Algorithmic Changes

These are the most promising, since they don't relate to pure technicalities
but imply potential improvements with some evidence.
I'd put in this category:

- the dynamic dictionary/string specialization by Fred Drake
  (this is already in). Can this be applied in other areas? If so, where?

- the Python-specific mallocs. Actually, I'm pretty sure that a lot of
  `overhead' is due to the standard mallocs which happen to be expensive
  for Python in both space and time. Python is very malloc-intensive.
  The only reason I've postponed my obmalloc patch is that I still haven't
  provided an interface which allows evaluating its impact on
  memory consumption. It gives a noticeable speedup on all machines, so
  it accounts as a good candidate w.r.t. performance.

- ??? (maybe some parts of MAL's optimizations could go here)

Category 2: Technical / Code optimizations

This category includes all (more or less) controversial proposals, like

- my latest lookdict optimizations (a typical controversial `quickie')

- opcode folding & reordering. Actually, I'm unclear on why Guido
  postponed the reordering idea; it has received positive feedback
  and all theoretical reasoning and practical experiments showed that
  this "could" help, although without any guarantees. Nobody reported
  slowdowns, though. This is typically a change without real dangers.

- kill the async / pending calls logic. (Tim, what happened with this
  proposal?)

- compact the unicodedata database, which is expected to reduce the
  mem footprint, maybe improve startup time, etc. (ongoing)

- proposal about optimizing the "file hits" on startup.

- others?

If there are potential `wise quickies', maybe it's good to refresh
them now and experiment a bit more before the final release?

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From mwh21 at cam.ac.uk  Fri Sep  8 18:39:58 2000
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: Fri, 8 Sep 2000 17:39:58 +0100 (BST)
Subject: [Python-Dev] 2.0 Optimization & speed
In-Reply-To: <200009081623.SAA14090@python.inrialpes.fr>
Message-ID: <Pine.LNX.4.10.10009081736070.29215-100000@localhost.localdomain>

It's 5:30 and I'm still at work (eek!) so for now I'll just say:

On Fri, 8 Sep 2000, Vladimir Marangozov wrote:
[...]
> Category 2: Technical / Code optimizations
[...]
> - others?

Killing off SET_LINENO?

Cheers,
M.





From mal at lemburg.com  Fri Sep  8 18:49:58 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 08 Sep 2000 18:49:58 +0200
Subject: [Python-Dev] 2.0 Optimization & speed
References: <200009081623.SAA14090@python.inrialpes.fr>
Message-ID: <39B918B6.659C6C88@lemburg.com>

Vladimir Marangozov wrote:
> 
> Continuing my impressions on the user's feedback to date: Donn Cave
> & MAL are at least two voices I've heard about an overall slowdown
> of the 2.0b1 release compared to 1.5.2. Frankly, I have no idea where
> this slowdown comes from and I believe that we have only vague guesses
> about the possible causes: unicode database, more opcodes in ceval, etc.
> 
> I wonder whether we are in a position to try improving Python's
> performance with some `wise quickies' in a next beta.

I don't think it's worth trying to optimize anything in the
beta series: optimizations need to be well tested and therefore
should go into 2.1.

Perhaps we ought to make these optimizations the big new issue
for 2.1...

It would fit well with the move to a more pluggable interpreter
design.

> But this raises
> a more fundamental question on what is our margin for manoeuvres at this
> point. This in turn implies that we need some classification of the
> proposed optimizations to date.
> 
> Perhaps it would be good to create a dedicated Web page for this, but
> in the meantime, let's try to build a list/table of the ideas that have
> been proposed so far. This would be useful anyway, and the list would be
> filled as time goes.
> 
> Trying to push this initiative one step further, here's a very rough start
> on the top of my head:
> 
> Category 1: Algorithmic Changes
> 
> These are the most promising, since they don't relate to pure technicalities
> but imply potential improvements with some evidence.
> I'd put in this category:
> 
> - the dynamic dictionary/string specialization by Fred Drake
>   (this is already in). Can this be applied in other areas? If so, where?
>
> - the Python-specific mallocs. Actually, I'm pretty sure that a lot of
>   `overhead' is due to the standard mallocs which happen to be expensive
>   for Python in both space and time. Python is very malloc-intensive.
>   The only reason I've postponed my obmalloc patch is that I still haven't
>   provided an interface which allows evaluating its impact on
>   memory consumption. It gives a noticeable speedup on all machines, so
>   it accounts as a good candidate w.r.t. performance.
> 
> - ??? (maybe some parts of MAL's optimizations could go here)

One addition would be my small dict patch: the dictionary
tables for small dictionaries are added to the dictionary
object itself rather than allocating a separate buffer.
This is useful for small dictionaries (8-16 entries) and
causes a speedup due to the fact that most instance dictionaries
are in fact of that size.
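[Editorial aside: a rough present-day illustration of why that helps, none of which is MAL's actual patch. Instance dictionaries really are tiny, so every instance paying for a separately allocated hash table adds up.]

```python
import sys

# A typical instance dictionary holds only a handful of entries, so
# inlining the table into the dict object saves one allocation per
# instance -- the effect the small-dict patch is after.
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)
print(len(p.__dict__))            # 2 entries, well under the 8-16 threshold
print(sys.getsizeof(p.__dict__))  # bytes spent on the dict object alone
```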
 
> Category 2: Technical / Code optimizations
> 
> This category includes all (more or less) controversial proposals, like
> 
> - my latest lookdict optimizations (a typical controversial `quickie')
> 
> - opcode folding & reordering. Actually, I'm unclear on why Guido
>   postponed the reordering idea; it has received positive feedback
>   and all theoretical reasoning and practical experiments showed that
>   this "could" help, although without any guarantees. Nobody reported
>   slowdowns, though. This is typically a change without real dangers.

Rather than folding opcodes, I'd suggest breaking the huge
switch in two or three parts so that the most commonly used
opcodes fit nicely into the CPU cache.
 
> - kill the async / pending calls logic. (Tim, what happened with this
>   proposal?)

In my patched version of 1.5 I have moved this logic into the
second part of the ceval switch: as a result, signals are only
queried if a less common opcode is used.

> - compact the unicodedata database, which is expected to reduce the
>   mem footprint, maybe improve startup time, etc. (ongoing)

This was postponed to 2.1. It doesn't have any impact on
performance... not even on memory footprint since it is only
loaded on demand by the OS.
 
> - proposal about optimizing the "file hits" on startup.

A major startup speedup can be had by using a smarter
file lookup mechanism. 

Another possibility is freeze()ing the whole standard lib 
and putting it into a shared module. I'm not sure how well
this works with packages, but it did work very well for
1.5.2 (see the mxCGIPython project).
 
> - others?
> 
> If there are potential `wise quickies', maybe it's good to refresh
> them now and experiment a bit more before the final release?

No, let's leave this for 2.1.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From cgw at fnal.gov  Fri Sep  8 19:18:01 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Fri, 8 Sep 2000 12:18:01 -0500 (CDT)
Subject: [Python-Dev] obsolete urlopen.py in CVS
Message-ID: <14777.8009.543626.966203@buffalo.fnal.gov>

Another obsolete file has magically appeared in my local CVS
workspace.  I am assuming that I should continue to report these sorts
of problems. If not, just tell me and I'll stop with these annoying
messages.  Is there a mail address for the CVS admin so I don't have
to bug the whole list?

Lib$ cvs status urlopen.py                                             
===================================================================
File: urlopen.py        Status: Up-to-date

   Working revision:    1.7
   Repository revision: 1.7     /cvsroot/python/python/dist/src/Lib/Attic/urlopen.py,v
   Sticky Tag:          (none)
   Sticky Date:         (none)
   Sticky Options:      (none)




From effbot at telia.com  Fri Sep  8 19:38:07 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Fri, 8 Sep 2000 19:38:07 +0200
Subject: [Python-Dev] 2.0 Optimization & speed
References: <200009081623.SAA14090@python.inrialpes.fr> <39B918B6.659C6C88@lemburg.com>
Message-ID: <00e401c019bb$904084a0$766940d5@hagrid>

mal wrote:
> > - compact the unicodedata database, which is expected to reduce the
> >   mem footprint, maybe improve startup time, etc. (ongoing)
> 
> This was postponed to 2.1. It doesn't have any impact on
> performance...

sure has, for anyone distributing python applications.  we're
talking more than 1 meg of extra binary bloat (over 2.5 megs
of extra source code...)

the 2.0 release PEP says:

    Compression of Unicode database - Fredrik Lundh
      SF Patch 100899
      At least for 2.0b1.  May be included in 2.0 as a bug fix.

(the API is frozen, and we have an extensive test suite...)

</F>




From fdrake at beopen.com  Fri Sep  8 19:29:54 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Fri, 8 Sep 2000 13:29:54 -0400 (EDT)
Subject: [Python-Dev] 2.0 Optimization & speed
In-Reply-To: <00e401c019bb$904084a0$766940d5@hagrid>
References: <200009081623.SAA14090@python.inrialpes.fr>
	<39B918B6.659C6C88@lemburg.com>
	<00e401c019bb$904084a0$766940d5@hagrid>
Message-ID: <14777.8722.902222.452584@cj42289-a.reston1.va.home.com>

Fredrik Lundh writes:
 > (the API is frozen, and we have an extensive test suite...)

  What are the reasons for the hold-up?


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From effbot at telia.com  Fri Sep  8 19:41:59 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Fri, 8 Sep 2000 19:41:59 +0200
Subject: [Python-Dev] obsolete urlopen.py in CVS
References: <14777.8009.543626.966203@buffalo.fnal.gov>
Message-ID: <00ea01c019bc$1929f4e0$766940d5@hagrid>

Charles G Waldman wrote:
> Another obsolete file has magically appeared in my local CVS
> workspace.  I am assuming that I should continue to report these sorts
> of problems. If not, just tell me and I'll stop with these annoying
> messages.

what exactly are you doing to check things out?

note that CVS may check things out from the Attic under
certain circumstances, like if you do "cvs update -D".  see
the CVS FAQ for more info.

</F>




From mal at lemburg.com  Fri Sep  8 19:43:40 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 08 Sep 2000 19:43:40 +0200
Subject: [Python-Dev] 2.0 Optimization & speed
References: <200009081623.SAA14090@python.inrialpes.fr> <39B918B6.659C6C88@lemburg.com> <00e401c019bb$904084a0$766940d5@hagrid>
Message-ID: <39B9254C.5209AC81@lemburg.com>

Fredrik Lundh wrote:
> 
> mal wrote:
> > > - compact the unicodedata database, which is expected to reduce the
> > >   mem footprint, maybe improve startup time, etc. (ongoing)
> >
> > This was postponed to 2.1. It doesn't have any impact on
> > performance...
> 
> sure has, for anyone distributing python applications.  we're
> talking more than 1 meg of extra binary bloat (over 2.5 megs
> of extra source code...)

Yes, but there's no impact on speed and that's what Valdimir
was referring to.
 
> the 2.0 release PEP says:
> 
>     Compression of Unicode database - Fredrik Lundh
>       SF Patch 100899
>       At least for 2.0b1.  May be included in 2.0 as a bug fix.
> 
> (the API is frozen, and we have an extensive test suite...)

Note that I want to redesign the Unicode database and ctype
access for 2.1: all databases should be accessible through
the unicodedatabase module which will be rewritten as Python
module. 

The real data will then go into auxiliary C modules
as static C data which are managed by the Python module
and loaded on demand. This means that what now is unicodedatabase
will then move into some _unicodedb module.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From cgw at fnal.gov  Fri Sep  8 20:13:48 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Fri, 8 Sep 2000 13:13:48 -0500 (CDT)
Subject: [Python-Dev] obsolete urlopen.py in CVS
In-Reply-To: <00ea01c019bc$1929f4e0$766940d5@hagrid>
References: <14777.8009.543626.966203@buffalo.fnal.gov>
	<00ea01c019bc$1929f4e0$766940d5@hagrid>
Message-ID: <14777.11356.106477.440474@buffalo.fnal.gov>

Fredrik Lundh writes:

 > what exactly are you doing to check things out?

cvs update -dAP

 > note that CVS may check things out from the Attic under
 > certain circumstances, like if you do "cvs update -D".  see
 > the CVS FAQ for more info.

No, I am not using the '-D' flag.






From Vladimir.Marangozov at inrialpes.fr  Fri Sep  8 21:27:06 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Fri, 8 Sep 2000 21:27:06 +0200 (CEST)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <14777.1372.641371.803126@anthem.concentric.net> from "Barry A. Warsaw" at Sep 08, 2000 11:27:24 AM
Message-ID: <200009081927.VAA14502@python.inrialpes.fr>

Barry A. Warsaw wrote:
> 
> 
> >>>>> "VM" == Vladimir Marangozov <Vladimir.Marangozov at inrialpes.fr> writes:
> 
>     VM> Seems like people are very surprised to see "print >> None"
>     VM> defaulting to "print >> sys.stderr". I must confess that now
>     VM> that I'm looking at it and after reading the PEP, this change
>     VM> lacks some argumentation.
> 
> sys.stdout, not stderr.

typo

> 
> I was pretty solidly -0 on this extension, but Guido wanted it (and
> even supplied the necessary patch!).  It tastes too magical to me,
> for exactly the same reasons you describe.
> 
> I hadn't thought of the None == /dev/null equivalence, but that's a
> better idea, IMO.  In fact, perhaps the printing could be optimized
> away when None is used (although you'd lose any side-effects there
> might be).  This would actually make extended print more useful
> because if you used
> 
>     print >> logfile
> 
> everywhere, you'd only need to start passing in logfile=None to
> disable printing.  OTOH, it's not too hard to use
> 
>     class Devnull:
>         def write(self, msg): pass
>
>     logfile = Devnull()

In no way different from using a function, say output(), or an instance
of a Stream class that can poke at will at file objects, instead of
extended print <0.5 wink>. This is a matter of personal taste, after all.

> 
> We'll have to wait until after the weekend for Guido's pronouncement.
> 

Sure. Note that I don't feel like I'll lose my sleep if this doesn't
change. However, it looks like the None business goes a bit too far here.
In the past, Guido used to label such things "creeping featurism", but
times change... :-)

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From bwarsaw at beopen.com  Fri Sep  8 21:36:01 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 8 Sep 2000 15:36:01 -0400 (EDT)
Subject: [Python-Dev] Challenge about print >> None
References: <14777.1372.641371.803126@anthem.concentric.net>
	<200009081927.VAA14502@python.inrialpes.fr>
Message-ID: <14777.16289.587240.778501@anthem.concentric.net>

>>>>> "VM" == Vladimir Marangozov <Vladimir.Marangozov at inrialpes.fr> writes:

    VM> Sure. Note that I don't feel like I'll lose my sleep if this
    VM> doesn't change. However, it looks like the None business goes
    VM> a bit too far here.  In the past, Guido used to label such
    VM> things "creeping featurism", but times change... :-)

Agreed.



From mal at lemburg.com  Fri Sep  8 22:26:45 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 08 Sep 2000 22:26:45 +0200
Subject: [Python-Dev] Letting ../site-packages override the standard lib ?!
References: <200009081702.LAA08275@localhost.localdomain>
		<Pine.LNX.4.21.0009081236020.16116-100000@amati.techno.com> <14777.18321.457342.757978@cj42289-a.reston1.va.home.com>
Message-ID: <39B94B85.BFD16019@lemburg.com>

As you may have heard, there are problems with the stock
XML support and the PyXML project due to both trying to
use the xml package namespace (see the xml-sig for details).

To provide more flexibility to the third-party tools in such
a situation, I think it would be worthwhile moving the
site-packages/ entry in sys.path in front of the lib/python2.0/
entry.

That way a third party tool can override the standard lib's
package or module or take appropriate action to reintegrate
the standard lib's package namespace into an extended one.

What do you think ?
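[Editorial aside: the proposal amounts to a one-line reordering of sys.path. A sketch of the effect follows; the paths are illustrative only, not the real installation layout.]

```python
def promote_site_packages(path):
    """Return a copy of `path` with site-packages entries moved to the
    front, so third-party packages shadow same-named stdlib packages."""
    site = [p for p in path if "site-packages" in p]
    rest = [p for p in path if "site-packages" not in p]
    return site + rest

# Illustrative entries only -- a real sys.path varies by platform.
before = [
    "/usr/local/lib/python2.0",
    "/usr/local/lib/python2.0/site-packages",
]
print(promote_site_packages(before))
```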

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From Vladimir.Marangozov at inrialpes.fr  Fri Sep  8 22:48:23 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Fri, 8 Sep 2000 22:48:23 +0200 (CEST)
Subject: [Python-Dev] 2.0 Optimization & speed
In-Reply-To: <39B9254C.5209AC81@lemburg.com> from "M.-A. Lemburg" at Sep 08, 2000 07:43:40 PM
Message-ID: <200009082048.WAA14671@python.inrialpes.fr>

M.-A. Lemburg wrote:
> 
> Fredrik Lundh wrote:
> > 
> > mal wrote:
> > > > - compact the unicodedata database, which is expected to reduce the
> > > >   mem footprint, maybe improve startup time, etc. (ongoing)
> > >
> > > This was postponed to 2.1. It doesn't have any impact on
> > > performance...
> > 
> > sure has, for anyone distributing python applications.  we're
> > talking more than 1 meg of extra binary bloat (over 2.5 megs
> > of extra source code...)
> 
> Yes, but there's no impact on speed and that's what Valdimir
> was referring to.

Hey Marc-Andre, what encoding are you using for printing my name? <wink>

>  
> > the 2.0 release PEP says:
> > 
> >     Compression of Unicode database - Fredrik Lundh
> >       SF Patch 100899
> >       At least for 2.0b1.  May be included in 2.0 as a bug fix.
> > 
> > (the API is frozen, and we have an extensive test suite...)
> 
> Note that I want to redesign the Unicode database and ctype
> access for 2.1: all databases should be accessible through
> the unicodedatabase module which will be rewritten as Python
> module. 
> 
> The real data will then go into auxiliary C modules
> as static C data which are managed by the Python module
> and loaded on demand. This means that what now is unicodedatabase
> will then move into some _unicodedb module.

Hey Marc-Andre, don't try to reduce /F's crunching efforts to dust.
My argument doesn't hold, but Fredrik has a point and I don't see how
your future changes would invalidate these efforts. If the size of
the distribution can be reduced, it should be reduced! Did you know
that telecom companies measure the quality of their technologies on
a per bit basis? <0.1 wink> Every bit costs money, and that's why
Van Jacobson packet-header compression has been invented and is
massively used. Whole armies of researchers are currently trying to
compensate for the irresponsible bloatware that people of the higher
layers are imposing on them <wink>. Careful!

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From jeremy at beopen.com  Fri Sep  8 22:54:33 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Fri, 8 Sep 2000 16:54:33 -0400 (EDT)
Subject: [Python-Dev] Letting ../site-packages override the standard lib ?!
In-Reply-To: <39B94B85.BFD16019@lemburg.com>
References: <200009081702.LAA08275@localhost.localdomain>
	<Pine.LNX.4.21.0009081236020.16116-100000@amati.techno.com>
	<14777.18321.457342.757978@cj42289-a.reston1.va.home.com>
	<39B94B85.BFD16019@lemburg.com>
Message-ID: <14777.21001.363279.137646@bitdiddle.concentric.net>

>>>>> "MAL" == M -A Lemburg <mal at lemburg.com> writes:

  MAL> To provide more flexibility to the third-party tools in such a
  MAL> situation, I think it would be worthwhile moving the
  MAL> site-packages/ entry in sys.path in front of the lib/python2.0/
  MAL> entry.

  MAL> That way a third party tool can override the standard lib's
  MAL> package or module or take appropriate action to reintegrate the
  MAL> standard lib's package namespace into an extended one.

  MAL> What do you think ?

I think it is a bad idea to encourage third party tools to override
the standard library.  We call it the standard library for a reason!

It invites confusion and headaches to read a bit of code that says
"import pickle" and have its meaning depend on what oddball packages
someone has installed on the system.  Good bye, portability!

If you want to use a third-party package that provides the same
interface as a standard library, it seems much cleaner to say so
explicitly.
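[Editorial aside: the explicit spelling Jeremy has in mind is presumably the familiar fallback idiom. The module name "fastpickle" below is made up for illustration.]

```python
# Opt in to a third-party drop-in by name; the standard library is the
# documented fallback, so a plain "import pickle" elsewhere keeps its
# usual meaning.  ("fastpickle" is a hypothetical module name.)
try:
    import fastpickle as pickle
except ImportError:
    import pickle

payload = pickle.dumps({"answer": 42})
assert pickle.loads(payload) == {"answer": 42}
```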

I would agree that there is an interesting design problem here.  I
think the problem is supporting interfaces, where an interface allows me
to write code that can run with any implementation of that interface.
I don't think hacking sys.path is a good solution.

Jeremy



From akuchlin at mems-exchange.org  Fri Sep  8 22:52:02 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Fri, 8 Sep 2000 16:52:02 -0400
Subject: [Python-Dev] Letting ../site-packages override the standard lib ?!
In-Reply-To: <14777.21001.363279.137646@bitdiddle.concentric.net>; from jeremy@beopen.com on Fri, Sep 08, 2000 at 04:54:33PM -0400
References: <200009081702.LAA08275@localhost.localdomain> <Pine.LNX.4.21.0009081236020.16116-100000@amati.techno.com> <14777.18321.457342.757978@cj42289-a.reston1.va.home.com> <39B94B85.BFD16019@lemburg.com> <14777.21001.363279.137646@bitdiddle.concentric.net>
Message-ID: <20000908165202.F12994@kronos.cnri.reston.va.us>

On Fri, Sep 08, 2000 at 04:54:33PM -0400, Jeremy Hylton wrote:
>It invites confusion and headaches to read a bit of code that says
>"import pickle" and have its meaning depend on what oddball packages
>someone has installed on the system.  Good bye, portability!

Amen.  But then, I was against adding xml/ in the first place...

--amk



From mal at lemburg.com  Fri Sep  8 22:53:32 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 08 Sep 2000 22:53:32 +0200
Subject: [Python-Dev] 2.0 Optimization & speed
References: <200009082048.WAA14671@python.inrialpes.fr>
Message-ID: <39B951CC.3C0AE801@lemburg.com>

Vladimir Marangozov wrote:
> 
> M.-A. Lemburg wrote:
> >
> > Fredrik Lundh wrote:
> > >
> > > mal wrote:
> > > > > - compact the unicodedata database, which is expected to reduce the
> > > > >   mem footprint, maybe improve startup time, etc. (ongoing)
> > > >
> > > > This was postponed to 2.1. It doesn't have any impact on
> > > > performance...
> > >
> > > sure has, for anyone distributing python applications.  we're
> > > talking more than 1 meg of extra binary bloat (over 2.5 megs
> > > of extra source code...)
> >
> > Yes, but there's no impact on speed and that's what Valdimir
> > was referring to.
> 
> Hey Marc-Andre, what encoding are you using for printing my name? <wink>

Yeah, I know... the codec swaps characters on an irregular basis
-- gotta fix that ;-)
 
> >
> > > the 2.0 release PEP says:
> > >
> > >     Compression of Unicode database - Fredrik Lundh
> > >       SF Patch 100899
> > >       At least for 2.0b1.  May be included in 2.0 as a bug fix.
> > >
> > > (the API is frozen, and we have an extensive test suite...)
> >
> > Note that I want to redesign the Unicode database and ctype
> > access for 2.1: all databases should be accessible through
> > the unicodedatabase module which will be rewritten as Python
> > module.
> >
> > The real data will then go into auxiliary C modules
> > as static C data which are managed by the Python module
> > and loaded on demand. This means that what now is unicodedatabase
> > will then move into some _unicodedb module.
> 
> Hey Marc-Andre, don't try to reduce /F's crunching efforts to dust.

Oh, I didn't try to reduce Fredrik's efforts at all. To the
contrary: I'm still looking forward to his melted down version
of the database and the ctype tables.

The point I wanted to make was that all this can well be
done for 2.1. There are many more urgent things which need
to get settled in the beta cycle. Size optimizations are
not necessarily one of them, IMHO.

> My argument doesn't hold, but Fredrik has a point and I don't see how
> your future changes would invalidate these efforts. If the size of
> the distribution can be reduced, it should be reduced! Did you know
> that telecom companies measure the quality of their technologies on
> a per bit basis? <0.1 wink> Every bit costs money, and that's why
> Van Jacobson packet-header compression has been invented and is
> massively used. Whole armies of researchers are currently trying to
> compensate the irresponsible bloatware that people of the higher
> layers are imposing on them <wink>. Careful!

True, but why the hurry?

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From tim_one at email.msn.com  Fri Sep  8 22:58:31 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 8 Sep 2000 16:58:31 -0400
Subject: [Python-Dev] Letting ../site-packages override the standard lib ?!
In-Reply-To: <20000908165202.F12994@kronos.cnri.reston.va.us>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEACHFAA.tim_one@email.msn.com>

[Andrew Kuchling]
> Amen.  But then, I was against adding xml/ in the first place...

So *you're* the guy who sabotaged the Windows installer!  Should have
guessed -- you almost got away with it, too <wink>.





From mal at lemburg.com  Fri Sep  8 23:31:06 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 08 Sep 2000 23:31:06 +0200
Subject: [Python-Dev] Letting ../site-packages override the standard lib ?!
References: <200009081702.LAA08275@localhost.localdomain>
		<Pine.LNX.4.21.0009081236020.16116-100000@amati.techno.com>
		<14777.18321.457342.757978@cj42289-a.reston1.va.home.com>
		<39B94B85.BFD16019@lemburg.com> <14777.21001.363279.137646@bitdiddle.concentric.net>
Message-ID: <39B95A9A.D5A01F53@lemburg.com>

Jeremy Hylton wrote:
> 
> >>>>> "MAL" == M -A Lemburg <mal at lemburg.com> writes:
> 
>   MAL> To provide more flexibility to the third-party tools in such a
>   MAL> situation, I think it would be worthwhile moving the
>   MAL> site-packages/ entry in sys.path in front of the lib/python2.0/
>   MAL> entry.
> 
>   MAL> That way a third party tool can override the standard lib's
>   MAL> package or module or take appropriate action to reintegrate the
>   MAL> standard lib's package namespace into an extended one.
> 
>   MAL> What do you think ?
> 
> I think it is a bad idea to encourage third party tools to override
> the standard library.  We call it the standard library for a reason!
> 
> It invites confusion and headaches to read a bit of code that says
> "import pickle" and have its meaning depend on what oddball packages
> someone has installed on the system.  Good bye, portability!

Ok... so we'll need a more flexible solution.
 
> If you want to use a third-party package that provides the same
> > interface as a standard library, it seems much cleaner to say so
> explicitly.
> 
> I would agree that there is an interesting design problem here.  I
> > think the problem is supporting interfaces, where an interface allows me
> to write code that can run with any implementation of that interface.
> I don't think hacking sys.path is a good solution.

No, the problem is different: there is currently no way to
automatically add subpackages to an existing package which is
not aware of these new subpackages, i.e. say you have a
package xml in the standard lib and somebody wants to install
a new subpackage wml.

The only way to do this is by putting it into the xml
package directory (bad!) or by telling the user to
run 

	import xml_wml

first which then does the

	import xml, wml
	xml.wml = wml

to complete the installation... there has to be a more elegant
way.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Fri Sep  8 23:48:18 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 08 Sep 2000 23:48:18 +0200
Subject: [Python-Dev] PyObject_SetAttr/GetAttr() and non-string attribute names
Message-ID: <39B95EA2.7D98AA4C@lemburg.com>

While hacking along on a patch to let set|get|hasattr() accept
Unicode attribute names, I found that all current tp_getattro
and tp_setattro implementations (classes, instances, methods) expect
to find string objects as argument and don't even check for this.

Is this documented somewhere ? Should we make the existing
implementations aware of other objects as well ? Should we
fix the de-facto definition to string attribute names ?

My current solution does the latter. It's available as patch
on SF.
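
(Archival note: the de-facto string-only rule stuck. In modern CPython the
generic attribute machinery checks the name type explicitly, which is easy
to observe from Python code; the snippet below uses today's syntax.)

```python
class Obj:
    pass

o = Obj()
try:
    # Attribute names must be strings; anything else is rejected
    # up front by the attribute machinery in current interpreters.
    setattr(o, 42, "value")
except TypeError as exc:
    print("rejected:", exc)
```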

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From jack at oratrix.nl  Sat Sep  9 00:55:01 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Sat, 09 Sep 2000 00:55:01 +0200
Subject: [Python-Dev] Need some hands to debug MacPython installer
Message-ID: <20000908225506.92145D71FF@oratrix.oratrix.nl>

Folks,
I need some people to test the MacPython 2.0b1 installer. It is almost 
complete, only things like the readme file and some of the
documentation (on building and such) remains to be done. At least: as
far as I know. If someone (or someones) could try
ftp://ftp.cwi.nl/pub/jack/python/mac/PythonMac20preb1Installer.bin 
and tell me whether it works that would be much appreciated.
One thing to note is that if you've been building 2.0b1 MacPythons
from the CVS repository you'll have to remove your preference file
first (no such problem with older prefs files).

All feedback is welcome, of course, but I'm especially interested in
hearing which things I've forgotten (if people could check that
expected new modules and such are indeed there), and which bits of the 
documentation (in Mac:Demo) need massaging. Oh, and bugs of course,
in the unlikely event of there being any :-)
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | ++++ see http://www.xs4all.nl/~tank/ ++++



From gstein at lyra.org  Sat Sep  9 01:08:55 2000
From: gstein at lyra.org (Greg Stein)
Date: Fri, 8 Sep 2000 16:08:55 -0700
Subject: [Python-Dev] Letting ../site-packages override the standard lib ?!
In-Reply-To: <39B95A9A.D5A01F53@lemburg.com>; from mal@lemburg.com on Fri, Sep 08, 2000 at 11:31:06PM +0200
References: <200009081702.LAA08275@localhost.localdomain> <Pine.LNX.4.21.0009081236020.16116-100000@amati.techno.com> <14777.18321.457342.757978@cj42289-a.reston1.va.home.com> <39B94B85.BFD16019@lemburg.com> <14777.21001.363279.137646@bitdiddle.concentric.net> <39B95A9A.D5A01F53@lemburg.com>
Message-ID: <20000908160855.B16566@lyra.org>

On Fri, Sep 08, 2000 at 11:31:06PM +0200, M.-A. Lemburg wrote:
> Jeremy Hylton wrote:
>...
> > If you want to use a third-party package that provides the same
> > interface as a standard library, it seems much clearn to say so
> > explicitly.
> > 
> > I would agree that there is an interesting design problem here.  I
> > think the problem is supporting interfaces, where an interface allows me
> > to write code that can run with any implementation of that interface.
> > I don't think hacking sys.path is a good solution.
> 
> No, the problem is different: there is currently no way to
> automatically add subpackages to an existing package which is
> not aware of these new subpackages, i.e. say you have a
> package xml in the standard lib and somebody wants to install
> a new subpackage wml.
> 
> The only way to do this is by putting it into the xml
> package directory (bad!) or by telling the user to
> run 
> 
> 	import xml_wml
> 
> first which then does the
> 
> 	import xml, wml
> 	xml.wml = wml
> 
> to complete the installation... there has to be a more elegant
> way.

There is. I proposed it a while back. Fred chose to use a different
mechanism, despite my recommendations to the contrary. *shrug*

The "current" mechanism requires the PyXML package to completely override the
entire xml package in the Python distribution. This has certain, um,
problems... :-)

Another approach would be to use the __path__ symbol. I dislike that for
various import design reasons, but it would solve one of the issues Fred had
with my recommendation (e.g. needing to pre-import subpackages).
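
The __path__ approach can be sketched as follows; this is a minimal,
hypothetical helper for illustration, not code from any patch under
discussion (the standard library later gained a similar
pkgutil.extend_path):

```python
import os
import sys

def extend_path(path, pkg_name):
    # Hypothetical helper: scan sys.path for other directories carrying
    # the same package name and append them to the package's __path__,
    # so a separately installed subpackage becomes importable without
    # touching the standard library's own directory.
    for entry in sys.path:
        candidate = os.path.join(entry, pkg_name)
        if os.path.isdir(candidate) and candidate not in path:
            path.append(candidate)
    return path

# Inside xml/__init__.py this would read:
#     __path__ = extend_path(__path__, "xml")
```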

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/



From cgw at fnal.gov  Sat Sep  9 01:41:12 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Fri, 8 Sep 2000 18:41:12 -0500 (CDT)
Subject: [Python-Dev] Need some hands to debug MacPython installer
In-Reply-To: <20000908225506.92145D71FF@oratrix.oratrix.nl>
References: <20000908225506.92145D71FF@oratrix.oratrix.nl>
Message-ID: <14777.31000.382351.905418@buffalo.fnal.gov>

Jack Jansen writes:
 > Folks,
 > I need some people to test the MacPython 2.0b1 installer. 

I am not a Mac user but I saw your posting and my wife has a Mac so I
decided to give it a try. 

When I ran the installer, a lot of the text referred to "Python 1.6"
despite this being a 2.0 installer.

As the install completed I got a message:  

 The application "Configure Python" could not be opened because
 "OTInetClientLib -- OTInetGetSecondaryAddresses" could not be found

After that, if I try to bring up PythonIDE or PythonInterpreter by
clicking on the 16-ton icons, I get the same message about
OTInetGetSecondaryAddresses.  So I'm not able to run Python at all
right now on this Mac.



From sdm7g at virginia.edu  Sat Sep  9 02:23:45 2000
From: sdm7g at virginia.edu (Steven D. Majewski)
Date: Fri, 8 Sep 2000 20:23:45 -0400 (EDT)
Subject: [Python-Dev] Re: [Pythonmac-SIG] Need some hands to debug MacPython installer
In-Reply-To: <20000908225506.92145D71FF@oratrix.oratrix.nl>
Message-ID: <Pine.A32.3.90.1000908201956.15033A-100000@elvis.med.Virginia.EDU>

On Sat, 9 Sep 2000, Jack Jansen wrote:

> All feedback is welcome, of course, but I'm especially interested in
> hearing which things I've forgotten (if people could check that
> expected new modules and such are indeed there), and which bits of the
> documentation (in Mac:Demo) need massaging. Oh, and bugs of course,
> in the unlikely event of there being any :-)

Install went smoothly. I haven't been following the latest developments,
so I'm not sure if this is SUPPOSED to work yet or not, but: 


Python 2.0b1 (#64, Sep  8 2000, 23:37:06)  [CW PPC w/GUSI2 w/THREADS]
Copyright (c) 2000 BeOpen.com.
All Rights Reserved.

 [...] 

>>> import thread
>>> import threading
Traceback (most recent call last):
  File "<input>", line 1, in ?
  File "Work:Python 2.0preb1:Lib:threading.py", line 538, in ?
    _MainThread()
  File "Work:Python 2.0preb1:Lib:threading.py", line 465, in __init__
    import atexit
ImportError: No module named atexit


(I'll try exercising some old scripts and see what else happens.)

---|  Steven D. Majewski   (804-982-0831)  <sdm7g at Virginia.EDU>  |---
---|  Department of Molecular Physiology and Biological Physics  |---
---|  University of Virginia             Health Sciences Center  |---
---|  P.O. Box 10011            Charlottesville, VA  22906-0011  |---
		"All operating systems want to be unix, 
		 All programming languages want to be lisp." 




From barry at scottb.demon.co.uk  Sat Sep  9 12:40:04 2000
From: barry at scottb.demon.co.uk (Barry Scott)
Date: Sat, 9 Sep 2000 11:40:04 +0100
Subject: [Python-Dev] xml missing in Windows installer?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOENFHEAA.tim_one@email.msn.com>
Message-ID: <000001c01a4a$5066f280$060210ac@private>

I understand what you did and why. What I think is wrong is to use the
same name for the filename of the windows installer, source tar etc.

Each kit has a unique version but you have not reflected it in the
filenames. Only the filename is visible in a browser.

Why can't you add the 3 vs. 4 mark to the file name?

I cannot see the time stamp from a browser without downloading the file.

Won't you be getting bug reports against 2.0b1 and not know which one
the user has, unless they realise they need to tell you that the #n is
important?

You don't have any quick way to check that the webmaster at CNRI has
changed the file to your newer version without downloading it.

I'm sure there are other tasks that users and developers will find made harder.

	BArry




From tim_one at email.msn.com  Sat Sep  9 13:18:21 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 9 Sep 2000 07:18:21 -0400
Subject: [Python-Dev] xml missing in Windows installer?
In-Reply-To: <000001c01a4a$5066f280$060210ac@private>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEBJHFAA.tim_one@email.msn.com>

Sorry, but I can't do anything more about this now.  The notice was supposed
to go up on the website at the same instant as the new installer, but the
people who can actually put the notice up *still* haven't done it.

In the future I'll certainly change the filename, should this ever happen
again (and, no, I can't change the filename from here either).

In the meantime, you don't want to hear this, but you're certainly free to
change the filenames on your end <wink -- but nobody yet has reported an
actual real-life confusion related to this, so while it may suck in theory,
practice appears much more forgiving>.

BTW, I didn't understand the complaint about "same name for the filename of
the windows installer, source tar etc.".  The *only* file I had replaced was

    BeOpen-Python-2.0b1.exe

I guess Fred replaced the PDF-format doc downloads too?  IIRC, those were
totally broken.  Don't think anything else was changed.

About bug reports, the only report of any possible relevance will be "I
tried to load the xml package under Windows 2.0b1, but got an
ImportError" -- and the cause of that will be obvious.  Also remember that
this is a beta release:  by definition, anyone using it at all a few weeks
from now is entirely on their own.

> -----Original Message-----
> From: Barry Scott [mailto:barry at scottb.demon.co.uk]
> Sent: Saturday, September 09, 2000 6:40 AM
> To: Tim Peters; python-dev at python.org
> Subject: RE: [Python-Dev] xml missing in Windows installer?
>
>
> I understand what you did and why. What I think is wrong is to use the
> same name for the filename of the windows installer, source tar etc.
>
> Each kit has a unique version but you have not reflected it in the
> filenames. Only the filename is visible in a browser.
>
> Why can't you add the 3 vs. 4 mark to the file name?
>
> I cannot see the time stamp from a browser without downloading the file.
>
> Won't you be getting bug reports against 2.0b1 and not know which one
> the user has, unless they realise they need to tell you that the #n
> is important?
>
> You don't have any quick way to check that the webmaster at CNRI
> has changed
> the file to your newer version without downloading it.
>
> I'm sure there are other tasks that users and developers will find
> made harder.
>
> 	BArry





From MarkH at ActiveState.com  Sat Sep  9 17:36:54 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Sun, 10 Sep 2000 02:36:54 +1100
Subject: [Python-Dev] xml missing in Windows installer?
In-Reply-To: <000001c01a4a$5066f280$060210ac@private>
Message-ID: <ECEPKNMJLHAPFFJHDOJBIEJHDIAA.MarkH@ActiveState.com>

> I understand what you did and why. What I think is wrong is to use the
> same name for the filename of the windows installer, source tar etc.

Seeing as everyone (both of you <wink>) is hassling Tim, let me also stick
up for the actions.  This is a beta release, and as Tim said, is not any
sort of fix, other than what is installed.  The symptoms are obvious.
Sheesh - most people will hardly be aware xml support is _supposed_ to be
there :-)

I can see the other POV, but I don't think this is worth the administrative
overhead of a newly branded release.

Feeling-chatty, ly.

Mark.




From jack at oratrix.nl  Sun Sep 10 00:53:50 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Sun, 10 Sep 2000 00:53:50 +0200
Subject: [Python-Dev] Re: [Pythonmac-SIG] Need some hands to debug MacPython installer
In-Reply-To: Message by "Steven D. Majewski" <sdm7g@virginia.edu> ,
	     Fri, 8 Sep 2000 20:23:45 -0400 (EDT) , <Pine.A32.3.90.1000908201956.15033A-100000@elvis.med.Virginia.EDU> 
Message-ID: <20000909225355.381DDD71FF@oratrix.oratrix.nl>

Oops, indeed some of the new modules were inadvertently excluded. I'll
create a new installer tomorrow (which should also contain the
documentation and such).
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 



From barry at scottb.demon.co.uk  Sun Sep 10 23:38:34 2000
From: barry at scottb.demon.co.uk (Barry Scott)
Date: Sun, 10 Sep 2000 22:38:34 +0100
Subject: [Python-Dev] How do you want bugs reported against 2.0 beta?
Message-ID: <000201c01b6f$78594510$060210ac@private>

I just checked the announcement on www.pythonlabs.com; it's not mentioned.

		Barry




From barry at scottb.demon.co.uk  Sun Sep 10 23:35:33 2000
From: barry at scottb.demon.co.uk (Barry Scott)
Date: Sun, 10 Sep 2000 22:35:33 +0100
Subject: [Python-Dev] xml missing in Windows installer?
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBIEJHDIAA.MarkH@ActiveState.com>
Message-ID: <000101c01b6f$0cc94250$060210ac@private>

I guess you had not seen Tim's reply. I read his reply as understanding
the problem and saying that things will be done better for future kits.

I'm glad that you will have unique names for each of the beta releases.
This will allow beta testers to accurately report which beta kit they
see a problem in. That in turn will make fixing bug reports from the
beta simpler for the maintainers.

	BArry




From tim_one at email.msn.com  Mon Sep 11 00:21:41 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 10 Sep 2000 18:21:41 -0400
Subject: [Python-Dev] xml missing in Windows installer?
In-Reply-To: <000101c01b6f$0cc94250$060210ac@private>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEEHHFAA.tim_one@email.msn.com>

[Barry Scott, presumably to Mark Hammond]
> I guess you had not seen Tim's reply.

Na, I think he did.  I bet he just thought you were being unbearably anal
about a non-problem in practice and wanted to annoy you back <wink>.

> I read his reply as understanding the problem and saying that things
> will be done better for future kits.

Oh yes.  We tried to take a shortcut, and it backfired.  I won't let that
happen again, and you were right to point it out (once <wink>).  BTW, the
notice *is* on the web site now, but depending on which browser you're
using, it may appear in a font so small it can't even be read!  The worst
part of moving to BeOpen.com so far was getting hooked up with professional
web designers who think HTML *should* be used for more than just giant
monolithic plain-text dumps <0.9 wink>; we can't change their elaborate
pages without extreme pain.

but-like-they-say-it's-the-sizzle-not-the-steak-ly y'rs  - tim





From tim_one at email.msn.com  Mon Sep 11 00:22:06 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 10 Sep 2000 18:22:06 -0400
Subject: [Python-Dev] How do you want bugs reported against 2.0 beta?
In-Reply-To: <000201c01b6f$78594510$060210ac@private>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEEIHFAA.tim_one@email.msn.com>

> I just checked the announcement on www.pythonlabs.com; it's
> not mentioned.

All bugs get reported on SourceForge.





From gward at mems-exchange.org  Mon Sep 11 15:53:53 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Mon, 11 Sep 2000 09:53:53 -0400
Subject: [Python-Dev] Letting ../site-packages override the standard lib ?!
In-Reply-To: <39B94B85.BFD16019@lemburg.com>; from mal@lemburg.com on Fri, Sep 08, 2000 at 10:26:45PM +0200
References: <200009081702.LAA08275@localhost.localdomain> <Pine.LNX.4.21.0009081236020.16116-100000@amati.techno.com> <14777.18321.457342.757978@cj42289-a.reston1.va.home.com> <39B94B85.BFD16019@lemburg.com>
Message-ID: <20000911095352.A24415@ludwig.cnri.reston.va.us>

On 08 September 2000, M.-A. Lemburg said:
> To provide more flexibility to the third-party tools in such
> a situation, I think it would be worthwhile moving the
> site-packages/ entry in sys.path in front of the lib/python2.0/
> entry.
> 
> That way a third party tool can override the standard lib's
> package or module or take appropriate action to reintegrate
> the standard lib's package namespace into an extended one.

+0 -- I actually *like* the ability to upgrade/override bits of the
standard library; this is occasionally essential, particularly when
there are modules (or even namespaces) in the standard library that have
lives (release cycles) of their own independent of Python and its
library.

There's already a note in the Distutils README.txt about how to upgrade
the Distutils under Python 1.6/2.0; it boils down to, "rename
lib/python2.0/distutils and then install the new version".  Are PyXML,
asyncore, cPickle, etc. going to need similar qualifications in their
READMEs?  Are RPMs (and other smart installers) of these modules going to
have to include code to do the renaming for you?

Ugh.  It's a proven fact that 73% of users don't read README files[1],
and I have a strong suspicion that the reliability of an RPM (or
whatever) decreases in proportion to the amount of
pre/post-install/uninstall code that it carries around with it.  I think
reordering sys.path would allow people to painlessly upgrade bits of the
standard library, and the benefits of this outweigh the "but then it's
not standard anymore!" objection.
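
The reordering under discussion amounts to a small sys.path shuffle; a
hypothetical sketch (illustrative only -- this was a proposal being
debated, not adopted behavior):

```python
import sys

def promote_site_packages(path):
    # Move site-packages entries ahead of the rest so packages
    # installed there shadow same-named standard library packages.
    # The leading entry (the script's own directory or "") stays first.
    site = [p for p in path if "site-packages" in p]
    rest = [p for p in path if "site-packages" not in p]
    return rest[:1] + site + rest[1:]

# Example with a Python 2.0-era layout:
example = ["", "/usr/local/lib/python2.0",
           "/usr/local/lib/python2.0/site-packages"]
print(promote_site_packages(example))
```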

        Greg

[1] And 65% of statistics are completely made up!



From cgw at fnal.gov  Mon Sep 11 20:55:09 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Mon, 11 Sep 2000 13:55:09 -0500 (CDT)
Subject: [Python-Dev] find_recursionlimit.py vs. libpthread vs. linux
Message-ID: <14781.10893.273438.446648@buffalo.fnal.gov>

It has been noted by people doing testing on Linux systems that

ulimit -s unlimited
python Misc/find_recursionlimit.py

will run for a *long* time if you have built Python without threads, but
will die after about 2400/2500 iterations if you have built with
threads, regardless of the "ulimit" setting.

I had thought this was evidence of a bug in Pthreads.  In fact
(although we still have other reasons to suspect Pthread bugs),
the behavior is easily explained.  The function "pthread_initialize"
in pthread.c contains this very lovely code:

  /* Play with the stack size limit to make sure that no stack ever grows
     beyond STACK_SIZE minus two pages (one page for the thread descriptor
     immediately beyond, and one page to act as a guard page). */
  getrlimit(RLIMIT_STACK, &limit);
  max_stack = STACK_SIZE - 2 * __getpagesize();
  if (limit.rlim_cur > max_stack) {
    limit.rlim_cur = max_stack;
    setrlimit(RLIMIT_STACK, &limit);
  }

In "internals.h", STACK_SIZE is #defined to (2 * 1024 * 1024)

So whenever you're using threads, you have an effective rlimit of 2MB
for stack, regardless of what you may *think* you have set via
"ulimit -s".
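
The clamp is easy to observe from the interpreter itself; a small sketch
using the standard resource module (Unix-only):

```python
import resource

def describe(value):
    # RLIM_INFINITY is the sentinel for "unlimited"
    if value == resource.RLIM_INFINITY:
        return "unlimited"
    return "%d bytes" % value

# On a threads-enabled build under LinuxThreads-era glibc, the soft
# limit reads back as roughly 2MB minus two pages, no matter what
# "ulimit -s" was set to before starting Python.
soft, hard = resource.getrlimit(resource.RLIMIT_STACK)
print("stack rlimit: soft=%s, hard=%s" % (describe(soft), describe(hard)))
```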

One more mystery explained!






From gward at mems-exchange.org  Mon Sep 11 23:13:00 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Mon, 11 Sep 2000 17:13:00 -0400
Subject: [Python-Dev] Off-topic: common employee IP agreements?
Message-ID: <20000911171259.A26210@ludwig.cnri.reston.va.us>

Hi all --

sorry for the off-topic post.  I'd like to get a calibration reading
from other members of the open source community on an issue that's
causing some controversy around here: what sort of employee IP
agreements do other software/open source/Python/Linux/Internet-related
companies require their employees to sign?

I'm especially curious about companies that are prominent in the open
source world, like Red Hat, ActiveState, VA Linux, or SuSE; and big
companies that are involved in open source, like IBM or HP.  I'm also
interested in what universities, both around the world and in the U.S.,
impose on faculty, students, and staff.  If you have knowledge -- or
direct experience -- with any sort of employee IP agreement, though, I'm
curious to hear about it.  If possible, I'd like to get my hands on the
exact document your employer uses -- precedent is everything!  ;-)

Thanks -- and please reply to me directly; no need to pollute python-dev
with more off-topic posts.

        Greg
-- 
Greg Ward - software developer                gward at mems-exchange.org
MEMS Exchange / CNRI                           voice: +1-703-262-5376
Reston, Virginia, USA                            fax: +1-703-262-5367



From guido at beopen.com  Tue Sep 12 01:10:31 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 11 Sep 2000 18:10:31 -0500
Subject: [Python-Dev] obsolete urlopen.py in CVS
In-Reply-To: Your message of "Fri, 08 Sep 2000 13:13:48 EST."
             <14777.11356.106477.440474@buffalo.fnal.gov> 
References: <14777.8009.543626.966203@buffalo.fnal.gov> <00ea01c019bc$1929f4e0$766940d5@hagrid>  
            <14777.11356.106477.440474@buffalo.fnal.gov> 
Message-ID: <200009112310.SAA08374@cj20424-a.reston1.va.home.com>

> Fredrik Lundh writes:
> 
>  > what exactly are you doing to check things out?

[Charles]
> cvs update -dAP
> 
>  > note that CVS may check things out from the Attic under
>  > certain circumstances, like if you do "cvs update -D".  see
>  > the CVS FAQ for more info.
> 
> No, I am not using the '-D' flag.

I would drop the -A flag -- what's it used for?

I've done the same dance for urlopen.py and it seems to have
disappeared now.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Tue Sep 12 01:14:38 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 11 Sep 2000 18:14:38 -0500
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: Your message of "Fri, 08 Sep 2000 15:47:08 +0200."
             <200009081347.PAA13686@python.inrialpes.fr> 
References: <200009081347.PAA13686@python.inrialpes.fr> 
Message-ID: <200009112314.SAA08409@cj20424-a.reston1.va.home.com>

[Vladimir]
> Seems like people are very surprised to see "print >> None" defaulting
> to "print >> sys.stderr". I must confess that now that I'm looking at
> it and after reading the PEP, this change lacks some argumentation.
> 
> In Python, this form surely looks & feels like the Unix cat /dev/null,
> that is, since None doesn't have a 'write' method, the print statement
> is expected to either raise an exception or be specialized for None to mean
> "the print statement has no effect". The deliberate choice of sys.stderr
> is not obvious.
> 
> I understand that Guido wanted to say "print >> None, args == print args"
> and simplify the script logic, but using None in this case seems like a
> bad spelling <wink>.
> 
> I have certainly carefully avoided any debates on the issue as I don't
> see myself using this feature any time soon, but when I see on c.l.py
> reactions of surprise on weakly argumented/documented features and I
> kind of feel the same way, I'd better ask for more arguments here myself.

(I read the followup and forgive you sys.stderr; didn't want to follow
up to the rest of the thread because it doesn't add much.)

After reading the little bit of discussion here, I still think
defaulting None to sys.stdout is a good idea.

Don't think of it as

  print >>None, args

Think of it as

  def func(file=None):
    print >>file, args

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From jeremy at beopen.com  Tue Sep 12 00:24:13 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Mon, 11 Sep 2000 18:24:13 -0400 (EDT)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <200009112314.SAA08409@cj20424-a.reston1.va.home.com>
References: <200009081347.PAA13686@python.inrialpes.fr>
	<200009112314.SAA08409@cj20424-a.reston1.va.home.com>
Message-ID: <14781.23437.165189.328323@bitdiddle.concentric.net>

>>>>> "GvR" == Guido van Rossum <guido at beopen.com> writes:

  GvR> Don't think of it as

  GvR>   print >>None, args

  GvR> Think of it as

  GvR>   def func(file=None):
  GvR>     print >>file, args

Huh?  Don't you mean think of it as:

def func(file=None):
    if file is None:
        import sys
        print >>sys.stdout, args
    else:
        print >>file, args

At least, I think that's why I find the use of None confusing.  I find
it hard to make a strong association between None and sys.stdout.  In
fact, when I was typing this message, I wrote it as sys.stderr and
only discovered my error upon re-reading the initial message.

Jeremy



From bwarsaw at beopen.com  Tue Sep 12 00:28:31 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 11 Sep 2000 18:28:31 -0400 (EDT)
Subject: [Python-Dev] Challenge about print >> None
References: <200009081347.PAA13686@python.inrialpes.fr>
	<200009112314.SAA08409@cj20424-a.reston1.va.home.com>
	<14781.23437.165189.328323@bitdiddle.concentric.net>
Message-ID: <14781.23695.934627.439238@anthem.concentric.net>

>>>>> "JH" == Jeremy Hylton <jeremy at beopen.com> writes:

    JH> At least, I think that's why I find the use of None confusing.
    JH> I find it hard to make a strong association between None and
    JH> sys.stdout.  In fact, when I was typing this message, I wrote
    JH> it as sys.stderr and only discovered my error upon re-reading
    JH> the initial message.

I think of it more like Vladimir does: "print >>None" should be
analogous to catting to /dev/null.

-Barry



From guido at beopen.com  Tue Sep 12 01:31:35 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 11 Sep 2000 18:31:35 -0500
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: Your message of "Mon, 11 Sep 2000 18:24:13 -0400."
             <14781.23437.165189.328323@bitdiddle.concentric.net> 
References: <200009081347.PAA13686@python.inrialpes.fr> <200009112314.SAA08409@cj20424-a.reston1.va.home.com>  
            <14781.23437.165189.328323@bitdiddle.concentric.net> 
Message-ID: <200009112331.SAA08558@cj20424-a.reston1.va.home.com>

> >>>>> "GvR" == Guido van Rossum <guido at beopen.com> writes:
> 
>   GvR> Don't think of it as
> 
>   GvR>   print >>None, args
> 
>   GvR> Think of it as
> 
>   GvR>   def func(file=None):
>   GvR>     print >>file, args
> 
> Huh?  Don't you mean think of it as:
> 
> def func(file=None):
>     if file is None:
>        import sys
>        print >>sys.stdout, args
>     else:
>     print >>file, args

I meant what I said.  I meant that you shouldn't think of examples
like the first one (which looks strange, just like "".join(list) does)
but examples like the second one, which (in my eye) make for more
readable and more maintainable code.

> At least, I think that's why I find the use of None confusing.  I find
> it hard to make a strong association between None and sys.stdout.  In
> fact, when I was typing this message, I wrote it as sys.stderr and
> only discovered my error upon re-reading the initial message.

You don't have to make a strong association with sys.stdout.  When the
file expression is None, the whole ">>file, " part disappears!

Note that the writeln() function, proposed by many, would have the
same behavior:

  def writeln(*args, file=None):
      if file is None:
          file = sys.stdout
      ...write args...

I know that's not legal syntax, but that's the closest
approximation.  This is intended to let you specify file=<some file>
and have the default be sys.stdout, but passing an explicit value of
None has the same effect as leaving it out.  This idiom is used in
lots of places!
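
(Archival note: keyword-only arguments after *args became legal syntax
later, with PEP 3102, so the pseudocode above is now runnable almost
verbatim. A sketch of the idiom in today's syntax:)

```python
import sys

def writeln(*args, file=None):
    # An explicit file=None behaves exactly like omitting the argument:
    # None is resolved to sys.stdout at call time.
    if file is None:
        file = sys.stdout
    file.write(" ".join(str(a) for a in args) + "\n")

writeln("hello", 42)             # goes to sys.stdout
writeln("hello", 42, file=None)  # same thing
```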

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Tue Sep 12 01:35:20 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 11 Sep 2000 18:35:20 -0500
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: Your message of "Mon, 11 Sep 2000 18:28:31 -0400."
             <14781.23695.934627.439238@anthem.concentric.net> 
References: <200009081347.PAA13686@python.inrialpes.fr> <200009112314.SAA08409@cj20424-a.reston1.va.home.com> <14781.23437.165189.328323@bitdiddle.concentric.net>  
            <14781.23695.934627.439238@anthem.concentric.net> 
Message-ID: <200009112335.SAA08609@cj20424-a.reston1.va.home.com>

>     JH> At least, I think that's why I find the use of None confusing.
>     JH> I find it hard to make a strong association between None and
>     JH> sys.stdout.  In fact, when I was typing this message, I wrote
>     JH> it as sys.stderr and only discovered my error upon re-reading
>     JH> the initial message.
> 
> I think of it more like Vladimir does: "print >>None" should be
> analogous to catting to /dev/null.

Strong -1 on that.  You can do that with any number of other
approaches.

If, as a result of a misplaced None, output appears at the wrong place
by accident, it's easy to figure out why.  If it disappears
completely, it's a much bigger mystery because you may start
suspecting lots of other places.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From Vladimir.Marangozov at inrialpes.fr  Tue Sep 12 01:22:46 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Tue, 12 Sep 2000 01:22:46 +0200 (CEST)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <200009112331.SAA08558@cj20424-a.reston1.va.home.com> from "Guido van Rossum" at Sep 11, 2000 06:31:35 PM
Message-ID: <200009112322.BAA29633@python.inrialpes.fr>

Guido van Rossum wrote:
> 
> >   GvR> Don't think of it as
> > 
> >   GvR>   print >>None, args
> > 
> >   GvR> Think of it as
> > 
> >   GvR>   def func(file=None):
> >   GvR>     print >>file, args

I understand that you want me to think this way. But that's not my
intuitive thinking. I would have written your example like this:

def func(file=sys.stdout):
    print >> file, args

This is clearer, compared to None, which is not a file.

> ...  This is intended to let you specify file=<some file>
> and have the default be sys.stdout, but passing an explicit value of
> None has the same effect as leaving it out.  This idiom is used in
> lots of places!

Exactly.
However, my expectation would be to leave out the whole print statement.
I think that any specialization of None is mysterious and would be hard
to teach. From this POV, I agree with MAL that raising an exception is
the cleanest and the simplest way to do it. Any specialization of my
thought here is perceived as a burden.

However, if such specialization is desired, I'm certainly closer to
/dev/null than sys.stdout. As long as one starts redirecting output,
I believe that one already has enough knowledge about files, and in
particular about stdin, stdout and stderr. None in the sense of /dev/null
is not so far from that. It is a simple concept. But this is already
"advanced knowledge" about redirecting output on purpose.

So as long as one uses extended print, she's already an advanced user.

From tim_one at email.msn.com  Tue Sep 12 03:27:10 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Mon, 11 Sep 2000 21:27:10 -0400
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <200009112322.BAA29633@python.inrialpes.fr>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEHLHFAA.tim_one@email.msn.com>

[Vladimir Marangozov]
> ...
> As long as one starts redirecting output, I believe that one already
> has enough knowledge about files, and in particular about stdin,
> stdout and stderr. None in the sense of /dev/null is not so far from
> that.  It is a simple concept. But this is already "advanced
> knowledge" about redirecting output on purpose.

This is so Unix-centric, though; e.g., native windows users have only the
dimmest knowledge of stderr, and almost none of /dev/null.  Which ties in
to:

> So as long as one uses extended print, she's already an advanced user.

Nope!  "Now how did I get this to print to a file instead?" is one of the
faqiest of newbie FAQs on c.l.py, and the answers they've been given in the
past were sheer torture for them ("sys?  what's that?  rebind sys.stdout to
a file-like object?  what?! etc").

This is one of those cases where Guido is right, but for reasons nobody can
explain <0.8 wink>.

sometimes-you-just-gotta-trust-your-bdfl-ly y'rs  - tim





From paul at prescod.net  Tue Sep 12 07:34:10 2000
From: paul at prescod.net (Paul Prescod)
Date: Mon, 11 Sep 2000 22:34:10 -0700
Subject: [Python-Dev] Challenge about print >> None
References: <200009112322.BAA29633@python.inrialpes.fr>
Message-ID: <39BDC052.A9FEDE80@prescod.net>

Vladimir Marangozov wrote:
> 
>...
> 
> def func(file=sys.stdout):
>     print >> file, args
> 
> This is clearer, compared to None, which is not a file.

I've gotta say that I agree with you on all issues. If I saw that
file=None stuff in code in another programming language I would expect
it meant send the output nowhere. People who want sys.stdout can get it.
Special cases aren't special enough to break the rules!
-- 
 Paul Prescod - Not encumbered by corporate consensus
Simplicity does not precede complexity, but follows it. 
	- http://www.cs.yale.edu/homes/perlis-alan/quotes.html



From effbot at telia.com  Tue Sep 12 09:10:53 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 12 Sep 2000 09:10:53 +0200
Subject: [Python-Dev] Challenge about print >> None
References: <200009112322.BAA29633@python.inrialpes.fr>
Message-ID: <003001c01c88$aad09420$766940d5@hagrid>

Vladimir wrote:
> I understand that you want me to think this way. But that's not my
> intuitive thinking. I would have written your example like this:
> 
> def func(file=sys.stdout):
>     print >> file, args
> 
> This is clearer, compared to None, which is not a file.

Sigh.  Your code doesn't work.  Quoting the PEP, from the section
that discusses why passing None is the same thing as passing no
file at all:

    "Note: defaulting the file argument to sys.stdout at compile time
    is wrong, because it doesn't work right when the caller assigns to
    sys.stdout and then uses tables() without specifying the file."

I was sceptical at first, but the more I see of your counter-arguments,
the more I support Guido here.  As he pointed out, None usually means
"pretend I didn't pass this argument" in Python.  No difference here.

+1 on keeping print as it's implemented (None means default).
-1 on making None behave like a NullFile.

</F>




From Vladimir.Marangozov at inrialpes.fr  Tue Sep 12 16:11:14 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Tue, 12 Sep 2000 16:11:14 +0200 (CEST)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIEHLHFAA.tim_one@email.msn.com> from "Tim Peters" at Sep 11, 2000 09:27:10 PM
Message-ID: <200009121411.QAA30848@python.inrialpes.fr>

Tim Peters wrote:
> 
> [Vladimir Marangozov]
> > ...
> > As long as one starts redirecting output, I believe that one already
> > has enough knowledge about files, and in particular about stdin,
> > stdout and stderr. None in the sense of /dev/null is not so far from
> > that.  It is a simple concept. But this is already "advanced
> > knowledge" about redirecting output on purpose.
> 
> This is so Unix-centric, though; e.g., native windows users have only the
> dimmest knowledge of stderr, and almost none of /dev/null.

Ok, forget about /dev/null. It was just a spelling of "print to None"
which has a meaning even in spoken English.


> Which ties in to:
> 
> > So as long as one uses extended print, she's already an advanced user.
> 
> Nope!  "Now how did I get this to print to a file instead?" is one of the
> faqiest of newbie FAQs on c.l.py, and the answers they've been given in the
> past were sheer torture for them ("sys?  what's that?  rebind sys.stdout to
> a file-like object?  what?! etc").

Look, this is getting silly. You can't align the experienced users' level
of knowledge with that of newbies. What I'm trying to make clear here is
that you're not disturbing newbies, you're disturbing experienced users
and teachers who are supposed to transmit their knowledge to these newbies.

FWIW, I am one of these teachers and I have had enough classes in this
domain to trust my experience and my judgement on the students' logic
more than Guido's and your perceptions taken together about *this*
feature in particular. If you want real feedback from newbies, don't take
c.l.py as the reference -- you'd better go to the nearest school or
University and start teaching.  (how's that as a reply to your attempts
to make me think one way or another or trust abbreviations <0.1 wink>)

As long as you have embarked in the output redirection business, you
have done so explicitly, because you're supposed to understand what it
means and how it works. This is "The Next Level" in knowledge, implying
that whenever you use extended print *explicitly*, you're supposed to
explicitly provide the target of the output.

Reverting that back with None, by saying that "print >> None == print"
is illogical, because you've already engaged in this advanced concept.
Rolling back your explicit decision about dealing with redirected output
with an explicit None (yes, you must provide it explicitly to fall back
to the original behavior) is the wrong path of reasoning.  If you don't
want to redirect output, don't use extended print in the first place.
And if you want to achieve the effect of "simple" print, you should pass
sys.stdout.

I really don't see the point of explicitly passing None instead of
passing sys.stdout, once you've made your decision about redirecting
output. And in this regard, both Guido and you have not provided any
arguments that would make me think that you're probably right.
I understand very well your POV, you don't seem to understand mine.

And let me add to that the following summary: the whole extended
print idea is about convenience. Convenience for those that know
what file redirection is. Not for newbies. You can't argue too much
about extended print as an intuitive concept for newbies. The present
change disturbs experienced users (the >> syntax aside) and you get
signals about that from them, because the current behavior does not
comply with any existing concept as far as file redirection is concerned.
However, since these guys are experienced and knowledgable, they already
understand this game quite well. So what you get is just "Oh really? OK,
this is messy" from the chatty ones and everybody moves on.  The others
just don't care, but they don't necessarily agree.

I don't care either, but fact is that I've filled two screens of text
explaining to you that you're playing with two different knowledge levels.
You shouldn't try to reduce the upper level to the lower one, just because
you think it is more Pythonic for newbies. You'd better take the opposite
direction and raise the newbie standard to what happens to be a very well
known concept in the area of computer programming, and in CS in general.

To provoke you a bit more, I'll tell you that I see no conceptual difference
between
             print >> None, args

and
             print >> 0, args -or- print >> [], args  -or- print >> "", args

(if you prefer, you can replace (), "", [], etc. with a var name, which can be
 assigned these values)

That is, I don't see a conceptual difference between None and any object
which evaluates to false. However, the latter are not allowed. Funny,
isn't it.  What makes None so special? <wink>

Now, the only argument I got is the one Fredrik has quoted from the PEP,
dealing with passing the default file as a parameter. I'll focus briefly
on it.

[Fredrik]

> [me]
> > def func(file=sys.stdout):
> >     print >> file, args
> > 
> > This is clearer, compared to None, which is not a file.
>
> Sigh.  Your code doesn't work.  Quoting the PEP, from the section
> that discusses why passing None is the same thing as passing no
> file at all:
> 
>     "Note: defaulting the file argument to sys.stdout at compile time
>     is wrong, because it doesn't work right when the caller assigns to
>     sys.stdout and then uses tables() without specifying the file."

Of course it doesn't work if you assign to sys.stdout. But hey,
if you assign to sys.stdout, you know what 'sys' is, what 'sys.stdout' is,
and you know basically everything about std files and output. Don't you?

Anyway, this argument is flawed, because the above is in no way
different than the issues raised when you define a default argument
which is a list, dict, tuple, etc. Compile time evaluation of default args
is a completely different discussion and extended print has (almost)
nothing to do with that. Guido has made this (strange) association between
two different subjects, which, btw, I perceive as an additional burden.

It is far better to deal with the value of the default argument within
the body of the function: this way, there are no misunderstandings.
None has all the symptoms of a hackish shortcut here.
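The default-argument issue alluded to here can be shown directly; a standalone sketch, not code from the thread:

```python
# Default expressions are evaluated once, at function-definition time.
# The classic demonstration uses a mutable default:
def append_bad(item, bucket=[]):
    bucket.append(item)
    return bucket

def append_good(item, bucket=None):
    if bucket is None:         # resolve the default inside the body
        bucket = []
    bucket.append(item)
    return bucket

assert append_bad(1) == [1]
assert append_bad(2) == [1, 2]      # the same list object persists
assert append_good(1) == [1]
assert append_good(2) == [2]        # a fresh list on every call

# `def func(file=sys.stdout)` has the same def-time trap: the object
# bound at definition time is used forever, even if the caller later
# rebinds sys.stdout.
```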

> 
> This is one of those cases where Guido is right, but for reasons nobody can
> explain <0.8 wink>.

I'm sorry. I think that this is one of those rare cases where he is wrong.
His path of reasoning is less straightforward, and I can't adopt it. And
it seems like I'm not alone. If you ever see a columnist talking about
Python's features and extended print (mentioning print >> None as a good
thing), please let me know about it.

> 
> sometimes-you-just-gotta-trust-your-bdfl-ly y'rs  - tim
> 

I would have preferred arguments. The PEP and your responses lack them,
which is another sign about this feature.


stop-troubadouring-about-blind-BDFL-compliance-in-public'ly y'rs
-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From effbot at telia.com  Tue Sep 12 16:48:11 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 12 Sep 2000 16:48:11 +0200
Subject: [Python-Dev] Challenge about print >> None
References: <200009121411.QAA30848@python.inrialpes.fr>
Message-ID: <004801c01cc8$7ed99700$766940d5@hagrid>

> > Sigh.  Your code doesn't work.  Quoting the PEP, from the section
> > that discusses why passing None is the same thing as passing no
> > file at all:
> > 
> >     "Note: defaulting the file argument to sys.stdout at compile time
> >     is wrong, because it doesn't work right when the caller assigns to
> >     sys.stdout and then uses tables() without specifying the file."
> 
> Of course it doesn't work if you assign to sys.stdout. But hey,
> if you assign to sys.stdout, you know what 'sys' is, what 'sys.stdout' is,
> and you know basically everything about std files and output. Don't you?

no.  and since you're so much smarter than everyone else,
you should be able to figure out why.

followups to /dev/null, please.

</F>




From Vladimir.Marangozov at inrialpes.fr  Tue Sep 12 19:12:04 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Tue, 12 Sep 2000 19:12:04 +0200 (CEST)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <004801c01cc8$7ed99700$766940d5@hagrid> from "Fredrik Lundh" at Sep 12, 2000 04:48:11 PM
Message-ID: <200009121712.TAA31347@python.inrialpes.fr>

Fredrik Lundh wrote:
> 
> no.  and since you're so much smarter than everyone else,
> you should be able to figure out why.
> 
> followups to /dev/null, please.

pass


print >> pep-0214.txt, next_argument_if_not_None 'ly y'rs
-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From tismer at appliedbiometrics.com  Tue Sep 12 18:35:13 2000
From: tismer at appliedbiometrics.com (Christian Tismer)
Date: Tue, 12 Sep 2000 19:35:13 +0300
Subject: [Python-Dev] Challenge about print >> None
References: <200009112322.BAA29633@python.inrialpes.fr> <003001c01c88$aad09420$766940d5@hagrid>
Message-ID: <39BE5B41.16143E76@appliedbiometrics.com>


Fredrik Lundh wrote:
> 
> Vladimir wrote:
> > I understand that you want me to think this way. But that's not my
> > intuitive thinking. I would have written your example like this:
> >
> > def func(file=sys.stdout):
> >     print >> file, args
> >
> > This is clearer, compared to None, which is not a file.

This is not clearer.
Instead, it is presetting a parameter
with a mutable object - bad practice!

> Sigh.  Your code doesn't work.  Quoting the PEP, from the section
> that discusses why passing None is the same thing as passing no
> file at all:
> 
>     "Note: defaulting the file argument to sys.stdout at compile time
>     is wrong, because it doesn't work right when the caller assigns to
>     sys.stdout and then uses tables() without specifying the file."
> 
> I was sceptical at first, but the more I see of your counter-arguments,
> the more I support Guido here.  As he pointed out, None usually means
> "pretend I didn't pass this argument" in Python.  No difference here.
> 
> +1 on keeping print as it's implemented (None means default).
> -1 on making None behave like a NullFile.

Seconded!

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer at appliedbiometrics.com>
Applied Biometrics GmbH      :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com



From nascheme at enme.ucalgary.ca  Tue Sep 12 20:03:55 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Tue, 12 Sep 2000 12:03:55 -0600
Subject: [Python-Dev] PyWX (Python AOLserver plugin)
In-Reply-To: <EDFD2A95EE7DD31187350090279C6767E459CE@THRESHER>; from Brent Fulgham on Tue, Sep 12, 2000 at 10:40:36AM -0700
References: <EDFD2A95EE7DD31187350090279C6767E459CE@THRESHER>
Message-ID: <20000912120355.A2457@keymaster.enme.ucalgary.ca>

You probably want to address the python-dev mailing list.  I have CCed
this message in the hope that some of the more experienced developers
can help.  The PyWX website is at: http://pywx.idyll.org/.

On Tue, Sep 12, 2000 at 10:40:36AM -0700, Brent Fulgham wrote:
> We've run across some problems with the Python's internal threading
> design, and its handling of module loading.
> 
> The AOLserver plugin spawns new Python interpreter threads to
> service new HTTP connections.  Each thread is theoretically its
> own interpreter, and should have its own namespace, set of loaded
> packages, etc.
> 
> This is largely true, but we run across trouble with the way
> the individual threads handle 'argv' variables and current
> working directory.
> 
> CGI scripts typically pass data as variables to the script
> (as argv).  These (unfortunately) are changed globally across
> all Python interpreter threads, which can cause problems....
> 
> In addition, the current working directory is not unique
> among independent Python interpreters.  So if a script changes
> its directory to something, all other running scripts (in
> unique python interpreter threads) now have their cwd set to
> this directory.
> 
> So we have to address these issues at some point...  Any hope
> that something like this could be fixed in 2.0?

Are you using separate interpreters or one interpreter with multiple
threads?  It sounds like the latter.  If you use the latter, then
definitely things like the process address space and the current working
directory are shared across the threads.  I don't think I understand
your design.  Can you explain the architecture of PyWX?

  Neil



From brent.fulgham at xpsystems.com  Tue Sep 12 20:18:03 2000
From: brent.fulgham at xpsystems.com (Brent Fulgham)
Date: Tue, 12 Sep 2000 11:18:03 -0700
Subject: [Python-Dev] RE: PyWX (Python AOLserver plugin)
Message-ID: <EDFD2A95EE7DD31187350090279C6767E45A09@THRESHER>

> Are you using separate interpreters or one interpreter with multiple
> threads?  It sounds like the latter.  If you use the latter, then
> definitely things like the process address space and the 
> current working directory are shared across the threads.  I don't 
> think I understand your design.  Can you explain the architecture
> of PyWX?
> 

There are some documents on the website that give a bit more detail,
but in a nutshell we were using the Python interpreter thread concept
(Py_InterpreterNew, etc.) to allow 'independent' interpreters to
service HTTP requests in the server.

We are basically running afoul of the problems with the interpreter
isolation, as documented in the various Python embed docs.

"""Because sub-interpreters (and the main interpreter) are part of
the same process, the insulation between them isn't perfect -- for 
example, using low-level file operations like os.close() they can
(accidentally or maliciously) affect each other's open files. 
Because of the way extensions are shared between (sub-)interpreters,
some extensions may not work properly; this is especially likely
when the extension makes use of (static) global variables, or when
the extension manipulates its module's dictionary after its 
initialization"""

So we are basically stuck.  We can't link against Python multiple
times, so our only avenue to provide multiple interpreter instances
is to use the "Py_InterpreterNew" call and hope for the best.

Any hope for better interpreter isolation in 2.0? (2.1?)

-Brent




From Vladimir.Marangozov at inrialpes.fr  Tue Sep 12 20:51:21 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Tue, 12 Sep 2000 20:51:21 +0200 (CEST)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <39BE5B41.16143E76@appliedbiometrics.com> from "Christian Tismer" at Sep 12, 2000 07:35:13 PM
Message-ID: <200009121851.UAA31622@python.inrialpes.fr>

Christian Tismer wrote:
> 
> > Vladimir wrote:
> > > I understand that you want me to think this way. But that's not my
> > > intuitive thinking. I would have written your example like this:
> > >
> > > def func(file=sys.stdout):
> > >     print >> file, args
> > >
> > > This is clearer, compared to None, which is not a file.
> 
> This is not clearer.
> Instead, it is presetting a parameter
> with a mutable object - bad practice!

I think I mentioned that default function args and explicit output
streams are two disjoint issues. In the case of extended print,
half of us perceive that as a mix of concepts unrelated to Python,
the other half sees them as natural for specifying default behavior
in Python. The real challenge about print >> None is that the latter
half would need to explain to the first one (including newcomers with
various backgrounds) that this is natural thinking in Python. I am
sceptical about the results, as long as one has to explain that this
is done on purpose to someone who thinks that this is a mix of concepts.

A naive illustration of the above is that "man fprintf" does not say
that when the stream is NULL, fprintf behaves like printf. Indeed,
fprintf(NULL, args) dumps core. There are two distinct functions for
different things. Either you care and you use fprintf (print >>),
or you don't care and you use printf (print). Not both. If you
think you can do both in one shot, elaborate on that magic in the PEP.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From cgw at fnal.gov  Tue Sep 12 20:47:31 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Tue, 12 Sep 2000 13:47:31 -0500 (CDT)
Subject: [Python-Dev] Unexpected rexec behavior due to _sre
Message-ID: <14782.31299.800325.803340@buffalo.fnal.gov>

Python 1.5.2 (#3, Feb 11 2000, 15:30:14)  [GCC 2.7.2.3.f.1] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
>>> import rexec
>>> r = rexec.RExec()
>>> r.r_exec("import re")
>>> 

Python 2.0b1 (#2, Sep  8 2000, 12:10:17) 
[GCC 2.95.2 19991024 (release)] on linux2
Type "copyright", "credits" or "license" for more information.
>>> import rexec
>>> r=rexec.RExec()
>>> r.r_exec("import re")

Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/usr/lib/python2.0/rexec.py", line 253, in r_exec
    exec code in m.__dict__
  File "<string>", line 1, in ?
  File "/usr/lib/python2.0/rexec.py", line 264, in r_import
    return self.importer.import_module(mname, globals, locals, fromlist)
  File "/usr/lib/python2.0/ihooks.py", line 396, in import_module
    q, tail = self.find_head_package(parent, name)
  File "/usr/lib/python2.0/ihooks.py", line 432, in find_head_package
    q = self.import_it(head, qname, parent)
  File "/usr/lib/python2.0/ihooks.py", line 485, in import_it
    m = self.loader.load_module(fqname, stuff)
  File "/usr/lib/python2.0/ihooks.py", line 324, in load_module
    exec code in m.__dict__
  File "/usr/lib/python2.0/re.py", line 28, in ?
    from sre import *
  File "/usr/lib/python2.0/rexec.py", line 264, in r_import
    return self.importer.import_module(mname, globals, locals, fromlist)
  File "/usr/lib/python2.0/ihooks.py", line 396, in import_module
    q, tail = self.find_head_package(parent, name)
  File "/usr/lib/python2.0/ihooks.py", line 432, in find_head_package
    q = self.import_it(head, qname, parent)
  File "/usr/lib/python2.0/ihooks.py", line 485, in import_it
    m = self.loader.load_module(fqname, stuff)
  File "/usr/lib/python2.0/ihooks.py", line 324, in load_module
    exec code in m.__dict__
  File "/usr/lib/python2.0/sre.py", line 19, in ?
    import sre_compile
  File "/usr/lib/python2.0/rexec.py", line 264, in r_import
    return self.importer.import_module(mname, globals, locals, fromlist)
  File "/usr/lib/python2.0/ihooks.py", line 396, in import_module
    q, tail = self.find_head_package(parent, name)
  File "/usr/lib/python2.0/ihooks.py", line 432, in find_head_package
    q = self.import_it(head, qname, parent)
  File "/usr/lib/python2.0/ihooks.py", line 485, in import_it
    m = self.loader.load_module(fqname, stuff)
  File "/usr/lib/python2.0/ihooks.py", line 324, in load_module
    exec code in m.__dict__
  File "/usr/lib/python2.0/sre_compile.py", line 11, in ?
    import _sre
  File "/usr/lib/python2.0/rexec.py", line 264, in r_import
    return self.importer.import_module(mname, globals, locals, fromlist)
  File "/usr/lib/python2.0/ihooks.py", line 396, in import_module
    q, tail = self.find_head_package(parent, name)
  File "/usr/lib/python2.0/ihooks.py", line 439, in find_head_package
    raise ImportError, "No module named " + qname
ImportError: No module named _sre

Of course I can work around this by doing:

>>> r.ok_builtin_modules += '_sre',
>>> r.r_exec("import re")          

But I really shouldn't have to do this, right?  _sre is supposed to be
a low-level implementation detail.  I think I should still be able to 
"import re" in a restricted environment without having to be aware of
_sre.



From effbot at telia.com  Tue Sep 12 21:12:20 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 12 Sep 2000 21:12:20 +0200
Subject: [Python-Dev] urllib problems under 2.0
Message-ID: <005e01c01ced$6bb19180$766940d5@hagrid>

the proxy code in 2.0b1's new urllib is broken on my box.

here's the troublemaker:

                proxyServer = str(_winreg.QueryValueEx(internetSettings,
                                                       'ProxyServer')[0])
                if ';' in proxyServer:        # Per-protocol settings
                    for p in proxyServer.split(';'):
                        protocol, address = p.split('=')
                        proxies[protocol] = '%s://%s' % (protocol, address)
                else:        # Use one setting for all protocols
                    proxies['http'] = 'http://%s' % proxyServer
                    proxies['ftp'] = 'ftp://%s' % proxyServer

now, on my box, the proxyServer string is "https=127.0.0.1:1080"
(an encryption proxy used by my bank), so the above code happily
creates the following proxy dictionary:

proxy = {
    "http": "http://https=127.0.0.1:1080",
    "ftp": "ftp://https=127.0.0.1:1080",
}

which, of course, results in a "host not found" no matter what URL
I pass to urllib...

:::

a simple fix would be to change the initial test to:

                if "=" in proxyServer:

does anyone have a better idea, or should I check this one
in right away?
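For illustration, the "=" test folds into the parsing logic like this (a standalone sketch of the discussed fix, not the code that was eventually checked in; the function name is hypothetical):

```python
def parse_proxy_settings(proxy_server):
    """Parse a Windows-registry ProxyServer value into a proxy dict.

    Handles both the single-proxy form ("host:port") and the
    per-protocol form ("http=host:port;https=host:port").
    """
    proxies = {}
    if "=" in proxy_server:            # per-protocol settings
        for part in proxy_server.split(";"):
            protocol, address = part.split("=", 1)
            proxies[protocol] = "%s://%s" % (protocol, address)
    else:                              # one setting for all protocols
        proxies["http"] = "http://%s" % proxy_server
        proxies["ftp"] = "ftp://%s" % proxy_server
    return proxies

# The troublesome registry value no longer yields bogus entries:
assert parse_proxy_settings("https=127.0.0.1:1080") == {
    "https": "https://127.0.0.1:1080"}
assert parse_proxy_settings("proxy.example.com:8080") == {
    "http": "http://proxy.example.com:8080",
    "ftp": "ftp://proxy.example.com:8080"}
```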

</F>




From titus at caltech.edu  Tue Sep 12 21:14:12 2000
From: titus at caltech.edu (Titus Brown)
Date: Tue, 12 Sep 2000 12:14:12 -0700
Subject: [Python-Dev] Re: [PyWX] RE: PyWX (Python AOLserver plugin)
In-Reply-To: <EDFD2A95EE7DD31187350090279C6767E45A09@THRESHER>; from brent.fulgham@xpsystems.com on Tue, Sep 12, 2000 at 11:18:03AM -0700
References: <EDFD2A95EE7DD31187350090279C6767E45A09@THRESHER>
Message-ID: <20000912121412.B6850@cns.caltech.edu>

-> > Are you using separate interpreters or one interpreter with multiple
-> > threads?  It sounds like the latter.  If you use the latter, then
-> > definitely things like the process address space and the 
-> > current working directory are shared across the threads.  I don't 
-> > think I understand your design.  Can you explain the architecture
-> > of PyWX?
-> > 
-> 
-> """Because sub-interpreters (and the main interpreter) are part of
-> the same process, the insulation between them isn't perfect -- for 
-> example, using low-level file operations like os.close() they can
-> (accidentally or maliciously) affect each other's open files. 
-> Because of the way extensions are shared between (sub-)interpreters,
-> some extensions may not work properly; this is especially likely
-> when the extension makes use of (static) global variables, or when
-> the extension manipulates its module's dictionary after its 
-> initialization"""
-> 
-> So we are basically stuck.  We can't link against Python multiple
-> times, so our only avenue to provide multiple interpreter instances
-> is to use the "Py_InterpreterNew" call and hope for the best.
-> 
-> Any hope for better interpreter isolation in 2.0? (2.1?)

Perhaps a better question is: is there any way to get around these problems
without moving from a threaded model (which we like) to a process model?

Many of the problems we're running into because of this lack of interpreter
isolation are due to the UNIX threading model, as I see it.  For example,
the low-level file operation interference, cwd problems, and environment
variable problems are all caused by UNIX's determination to share this stuff
across threads.  I don't see any way of changing this without causing far
more problems than we fix.
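The shared working directory is easy to demonstrate; a minimal standalone sketch (not PyWX code):

```python
import os
import tempfile
import threading

# A chdir() made by one thread is immediately visible to every other
# thread in the process -- the cwd is per-process state, not per-thread.
target = tempfile.mkdtemp()
original = os.getcwd()

def worker():
    os.chdir(target)           # "local" to this thread in intent only

t = threading.Thread(target=worker)
t.start()
t.join()

seen_from_main = os.getcwd()   # the main thread's cwd changed too
os.chdir(original)             # restore

assert seen_from_main == os.path.realpath(target)
```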

cheers,
--titus



From effbot at telia.com  Tue Sep 12 21:34:58 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 12 Sep 2000 21:34:58 +0200
Subject: [Python-Dev] Challenge about print >> None
References: <200009121851.UAA31622@python.inrialpes.fr>
Message-ID: <006e01c01cf0$921a4da0$766940d5@hagrid>

vladimir wrote:
> In the case of extended print, half of us perceive that as a mix of
> concepts unrelated to Python, the other half sees them as natural
> for specifying default behavior in Python.

Sigh.  None doesn't mean "default", it means "doesn't exist"
"nothing" "ingenting" "nada" "none" etc.

"def foo(): return" uses None to indicate that there was no
return value.

"map(None, seq)" uses None to indicate that there is really
no function to map things through.

"import" stores None in sys.modules to indicate that certain
package components don't exist.

"print >>None, value" uses None to indicate that there is
really no redirection -- in other words, the value is printed
in the usual location.

</None>




From effbot at telia.com  Tue Sep 12 21:40:04 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 12 Sep 2000 21:40:04 +0200
Subject: [Python-Dev] XML runtime errors?
Message-ID: <009601c01cf1$467458e0$766940d5@hagrid>

stoopid question: why the heck is xmllib using
"RuntimeError" to flag XML syntax errors?

raise RuntimeError, 'Syntax error at line %d: %s' % (self.lineno, message)

what's wrong with "SyntaxError"?
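A sketch of the alternative (the class name is hypothetical, not xmllib's actual API):

```python
# Flag malformed XML with a dedicated error type derived from
# SyntaxError instead of the generic RuntimeError, so callers can
# still catch the standard exception type.
class XMLSyntaxError(SyntaxError):
    pass

def report_error(lineno, message):
    raise XMLSyntaxError("Syntax error at line %d: %s" % (lineno, message))

try:
    report_error(12, "mismatched tag")
except SyntaxError as exc:     # catching the base type works
    caught = str(exc)

assert caught == "Syntax error at line 12: mismatched tag"
```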

</F>




From Vladimir.Marangozov at inrialpes.fr  Tue Sep 12 21:43:32 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Tue, 12 Sep 2000 21:43:32 +0200 (CEST)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <006e01c01cf0$921a4da0$766940d5@hagrid> from "Fredrik Lundh" at Sep 12, 2000 09:34:58 PM
Message-ID: <200009121943.VAA31771@python.inrialpes.fr>

Fredrik Lundh wrote:
> 
> vladimir wrote:
> > In the case of extended print, half of us perceive that as a mix of
> > concepts unrelated to Python, the other half sees them as natural
> > for specifying default behavior in Python.
> 
> Sigh.  None doesn't mean "default", it means "doesn't exist",
> "nothing", "ingenting", "nada", "none", etc.
> 
> "def foo(): return" uses None to indicate that there was no
> return value.
> 
> "map(None, seq)" uses None to indicate that there are really
> no function to map things through.
> 
> "import" stores None in sys.modules to indicate that certain
> package components don't exist.
> 
> "print >>None, value" uses None to indicate that there is
> really no redirection -- in other words, the value is printed
> in the usual location.

PEP that without the import example (it's obfuscated). If you can add
more of them, you'll save yourself time answering questions. I couldn't
have done it, because I still belong to my half <wink>.

hard-to-make-progress-but-constructivism-wins-in-the-end'ly y'rs
-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From guido at beopen.com  Tue Sep 12 23:46:32 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 12 Sep 2000 16:46:32 -0500
Subject: [Python-Dev] Re: [PyWX] RE: PyWX (Python AOLserver plugin)
In-Reply-To: Your message of "Tue, 12 Sep 2000 12:14:12 MST."
             <20000912121412.B6850@cns.caltech.edu> 
References: <EDFD2A95EE7DD31187350090279C6767E45A09@THRESHER>  
            <20000912121412.B6850@cns.caltech.edu> 
Message-ID: <200009122146.QAA01374@cj20424-a.reston1.va.home.com>

> > This is largely true, but we run across trouble with the way
> > the individual threads handle 'argv' variables and current
> > working directory.
> > 
> > CGI scripts typically pass data as variables to the script
> > (as argv).  These (unfortunately) are changed globally across
> > all Python interpreter threads, which can cause problems....
> > 
> > In addition, the current working directory is not unique
> > among independent Python interpreters.  So if a script changes
> > its directory to something, all other running scripts (in
> > unique python interpreter threads) now have their cwd set to
> > this directory.

There's no easy way to fix the current directory problem.  Just tell
your CGI programmers that os.chdir() is off-limits; you may remove it
from the os module (and from the posix module) during initialization
of your interpreter to enforce this.

I don't understand how you would be sharing sys.argv between multiple
interpreters.  Sure, the initial sys.argv is the same (they all
inherit that from the C main()) but after that you can set it to
whatever you want and they should not be shared.

Are you *sure* you are using PyInterpreterState_New() and not just
creating new threads?

> -> So we are basically stuck.  We can't link against Python multiple
> -> times, so our only avenue to provide multiple interpreter instances
> -> is to use the "Py_InterpreterNew" call and hope for the best.
> -> 
> -> Any hope for better interpreter isolation in 2.0? (2.1?)
> 
> Perhaps a better question is: is there any way to get around these problems
> without moving from a threaded model (which we like) to a process model?
> 
> Many of the problems we're running into because of this lack of interpreter
> isolation are due to the UNIX threading model, as I see it.  For example,
> the low-level file operation interference, cwd problems, and environment
> variable problems are all caused by UNIX's determination to share this stuff
> across threads.  I don't see any way of changing this without causing far
> more problems than we fix.

That's the whole point of using threads -- they share as much state as
they can.  I don't see how you can do better without going to
processes.  You could perhaps maintain the illusion of a per-thread
current directory, but you'd have to modify every function that uses
pathnames to take the simulated pwd into account...
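
The per-thread illusion could be sketched with thread-local storage; the
names below (chdir, getcwd, resolve) are hypothetical wrappers, and every
pathname-taking function would have to be rewritten to go through resolve():

```python
import os
import threading

_state = threading.local()      # each thread gets its own virtual cwd

def getcwd():
    """Return this thread's simulated working directory ('/' by default)."""
    return getattr(_state, "cwd", os.sep)

def chdir(path):
    """Record a new virtual cwd for the calling thread only; the real
    process-wide cwd is never touched."""
    _state.cwd = os.path.normpath(os.path.join(getcwd(), path))

def resolve(path):
    """Apply the simulated cwd to a pathname before handing it to the
    real filesystem calls."""
    return os.path.normpath(os.path.join(getcwd(), path))
```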

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Tue Sep 12 23:48:47 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 12 Sep 2000 16:48:47 -0500
Subject: [Python-Dev] Unexpected rexec behavior due to _sre
In-Reply-To: Your message of "Tue, 12 Sep 2000 13:47:31 EST."
             <14782.31299.800325.803340@buffalo.fnal.gov> 
References: <14782.31299.800325.803340@buffalo.fnal.gov> 
Message-ID: <200009122148.QAA01404@cj20424-a.reston1.va.home.com>

> Python 1.5.2 (#3, Feb 11 2000, 15:30:14)  [GCC 2.7.2.3.f.1] on linux2
> Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
> >>> import rexec
> >>> r = rexec.RExec()
> >>> r.r_exec("import re")
> >>> 
> 
> Python 2.0b1 (#2, Sep  8 2000, 12:10:17) 
> [GCC 2.95.2 19991024 (release)] on linux2
> Type "copyright", "credits" or "license" for more information.
> >>> import rexec
> >>> r=rexec.RExec()
> >>> r.r_exec("import re")
> 
> Traceback (most recent call last):
[...]
> ImportError: No module named _sre
> 
> Of course I can work around this by doing:
> 
> >>> r.ok_builtin_modules += '_sre',
> >>> r.r_exec("import re")          
> 
> But I really shouldn't have to do this, right?  _sre is supposed to be
> a low-level implementation detail.  I think I should still be able to 
> "import re" in an restricted environment without having to be aware of
> _sre.

The rexec.py module needs to be fixed.  Should be simple enough.
There may be other modules that it should allow too!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Tue Sep 12 23:52:45 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 12 Sep 2000 16:52:45 -0500
Subject: [Python-Dev] urllib problems under 2.0
In-Reply-To: Your message of "Tue, 12 Sep 2000 21:12:20 +0200."
             <005e01c01ced$6bb19180$766940d5@hagrid> 
References: <005e01c01ced$6bb19180$766940d5@hagrid> 
Message-ID: <200009122152.QAA01423@cj20424-a.reston1.va.home.com>

> the proxy code in 2.0b1's new urllib is broken on my box.

Before you fix this, let's figure out what the rules for proxy
settings in the registry are supposed to be, and document these.
How do these get set?

(This should also be documented for Unix if it isn't already; problems
with configuring proxies are ever-recurring questions it seems.  I
haven't used a proxy in years so I'm not good at fixing it... :-)

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Tue Sep 12 23:55:48 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 12 Sep 2000 16:55:48 -0500
Subject: [Python-Dev] XML runtime errors?
In-Reply-To: Your message of "Tue, 12 Sep 2000 21:40:04 +0200."
             <009601c01cf1$467458e0$766940d5@hagrid> 
References: <009601c01cf1$467458e0$766940d5@hagrid> 
Message-ID: <200009122155.QAA01452@cj20424-a.reston1.va.home.com>

[/F]
> stoopid question: why the heck is xmllib using
> "RuntimeError" to flag XML syntax errors?

Because it's too cheap to declare its own exception?

> raise RuntimeError, 'Syntax error at line %d: %s' % (self.lineno, message)
> 
> what's wrong with "SyntaxError"?

That would be the wrong exception unless it's parsing Python source
code.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From akuchlin at mems-exchange.org  Tue Sep 12 22:56:10 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Tue, 12 Sep 2000 16:56:10 -0400
Subject: [Python-Dev] Unexpected rexec behavior due to _sre
In-Reply-To: <200009122148.QAA01404@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Tue, Sep 12, 2000 at 04:48:47PM -0500
References: <14782.31299.800325.803340@buffalo.fnal.gov> <200009122148.QAA01404@cj20424-a.reston1.va.home.com>
Message-ID: <20000912165610.A554@kronos.cnri.reston.va.us>

On Tue, Sep 12, 2000 at 04:48:47PM -0500, Guido van Rossum wrote:
>The rexec.py module needs to be fixed.  Should be simple enough.
>There may be other modules that it should allow too!

Are we sure that it's not possible to engineer segfaults or other
nastiness by deliberately feeding _sre bad data?  This was my primary
reason for not exposing the PCRE bytecode interface, since it would
have been difficult to make the code robust against hostile bytecodes.

--amk



From guido at beopen.com  Wed Sep 13 00:27:01 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 12 Sep 2000 17:27:01 -0500
Subject: [Python-Dev] Unexpected rexec behavior due to _sre
In-Reply-To: Your message of "Tue, 12 Sep 2000 16:56:10 -0400."
             <20000912165610.A554@kronos.cnri.reston.va.us> 
References: <14782.31299.800325.803340@buffalo.fnal.gov> <200009122148.QAA01404@cj20424-a.reston1.va.home.com>  
            <20000912165610.A554@kronos.cnri.reston.va.us> 
Message-ID: <200009122227.RAA01676@cj20424-a.reston1.va.home.com>

[AMK]
> Are we sure that it's not possible to engineer segfaults or other
> nastiness by deliberately feeding _sre bad data?  This was my primary
> reason for not exposing the PCRE bytecode interface, since it would
> have been difficult to make the code robust against hostile bytecodes.

Good point!

But how do we support using the re module in restricted mode then?

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From skip at mojam.com  Tue Sep 12 23:26:49 2000
From: skip at mojam.com (Skip Montanaro)
Date: Tue, 12 Sep 2000 16:26:49 -0500 (CDT)
Subject: [Python-Dev] urllib problems under 2.0
In-Reply-To: <200009122152.QAA01423@cj20424-a.reston1.va.home.com>
References: <005e01c01ced$6bb19180$766940d5@hagrid>
	<200009122152.QAA01423@cj20424-a.reston1.va.home.com>
Message-ID: <14782.40857.437768.652808@beluga.mojam.com>

    Guido> (This should also be documented for Unix if it isn't already;
    Guido> problems with configuring proxies are ever-recurring questions it
    Guido> seems.  I haven't used a proxy in years so I'm not good at fixing
    Guido> it... :-)

Under Unix, proxy server specifications are simply URLs (or URIs?) that
specify a protocol ("scheme" in urlparse parlance), a host and (usually) a
port, e.g.:

    http_proxy='http://manatee.mojam.com:3128' ; export http_proxy

I've been having an ongoing discussion with a Windows user who seems to be
stumbling upon the same problem that Fredrik encountered.  If I read the
urllib.getproxies_registry code correctly, it looks like it's expecting a
string that doesn't include a protocol, e.g. simply
"manatee.mojam.com:3128".  This seems a bit inflexible to me, since you
might want to offer multiprotocol proxies through a single URI (though that
may well be what Windows offers its users).  For instance, I believe Squid
will proxy both ftp and http requests via HTTP.  Requiring ftp proxies to
go through an actual ftp proxy seems unnecessarily restrictive.  My thought
(and I can't test this) is that the
code around urllib.py line 1124 should be

                else:        # Use one setting for all protocols
                    proxies['http'] = proxyServer
                    proxies['ftp'] = proxyServer

but that's just a guess based upon the values this other fellow has sent me
and assumes that the Windows registry is supposed to hold proxy information
that contains the protocol.  I cc'd Mark Hammond on my last email to the
user.  Perhaps he'll have something interesting to say when he gets up.
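
For what it's worth, a sketch of how both registry formats might be
handled; the "scheme=host:port;..." per-protocol format is an assumption
based on the values reported, not a verified spec:

```python
def parse_proxy_server(value):
    """Parse a Windows-style ProxyServer registry value into a dict
    mapping protocol name to a proxy URL (sketch, not tested against
    a real registry)."""
    proxies = {}
    if "=" in value:
        # Per-protocol form: "http=host:port;ftp=host:port"
        for part in value.split(";"):
            scheme, _, address = part.partition("=")
            proxies[scheme] = "%s://%s" % (scheme, address)
    else:
        # One setting for all protocols; assume an HTTP-speaking proxy
        # also carries ftp requests, as Squid does.
        for scheme in ("http", "ftp"):
            proxies[scheme] = "http://%s" % value
    return proxies
```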

Skip



From fdrake at beopen.com  Tue Sep 12 23:26:17 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Tue, 12 Sep 2000 17:26:17 -0400 (EDT)
Subject: [Python-Dev] Unexpected rexec behavior due to _sre
In-Reply-To: <200009122227.RAA01676@cj20424-a.reston1.va.home.com>
References: <14782.31299.800325.803340@buffalo.fnal.gov>
	<200009122148.QAA01404@cj20424-a.reston1.va.home.com>
	<20000912165610.A554@kronos.cnri.reston.va.us>
	<200009122227.RAA01676@cj20424-a.reston1.va.home.com>
Message-ID: <14782.40825.627148.54355@cj42289-a.reston1.va.home.com>

Guido van Rossum writes:
 > But how do we support using the re module in restricted mode then?

  Perhaps providing a bastion wrapper around the re module, which
would allow the implementation details to be completely hidden?
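
A rough sketch of the idea, exposing only an explicit allow-list of names;
the ModuleBastion class is hypothetical (the stdlib Bastion module of the
time wrapped instances, not modules):

```python
import re

class ModuleBastion:
    """Hypothetical wrapper that hides a module's implementation
    details (such as _sre) behind an explicit allow-list."""

    def __init__(self, module, allowed):
        self.__module = module
        self.__allowed = frozenset(allowed)

    def __getattr__(self, name):
        # Called for every lookup that misses the instance dict.
        if name not in self.__allowed:
            raise AttributeError("attribute %r is hidden" % name)
        return getattr(self.__module, name)

safe_re = ModuleBastion(re, ["compile", "match", "search", "escape"])
```

Restricted code would be handed safe_re instead of re, so only the listed
entry points are reachable.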


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From effbot at telia.com  Tue Sep 12 23:50:53 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 12 Sep 2000 23:50:53 +0200
Subject: [Python-Dev] Unexpected rexec behavior due to _sre
References: <14782.31299.800325.803340@buffalo.fnal.gov> <200009122148.QAA01404@cj20424-a.reston1.va.home.com> <20000912165610.A554@kronos.cnri.reston.va.us>
Message-ID: <01d701c01d03$86dfdfa0$766940d5@hagrid>

andrew wrote:
> Are we sure that it's not possible to engineer segfaults or other
> nastiness by deliberately feeding _sre bad data?

it's pretty easy to trick _sre into reading from the wrong place
(however, it shouldn't be possible to return such data to the
Python level, and you cannot write into arbitrary locations).

fixing this would probably hurt performance, but I can look into it.

can the Bastion module be used to wrap entire modules?

</F>




From effbot at telia.com  Wed Sep 13 00:01:36 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Wed, 13 Sep 2000 00:01:36 +0200
Subject: [Python-Dev] XML runtime errors?
References: <009601c01cf1$467458e0$766940d5@hagrid>  <200009122155.QAA01452@cj20424-a.reston1.va.home.com>
Message-ID: <01f701c01d05$0aa98e20$766940d5@hagrid>

> [/F]
> > stoopid question: why the heck is xmllib using
> > "RuntimeError" to flag XML syntax errors?
> 
> Because it's too cheap to declare its own exception?

how about adding:

    class XMLError(RuntimeError):
        pass

(and maybe one or more XMLError subclasses?)
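
Subclassing RuntimeError would also keep existing callers working; a quick
sketch (the parse function and message are made up for illustration):

```python
class XMLError(RuntimeError):
    """Proposed exception for XML syntax errors (sketch)."""

def parse(text):
    # Hypothetical parser entry point, just to show the exception flow.
    if not text.startswith("<"):
        raise XMLError("Syntax error at line 1: expected '<'")

# New code can catch the specific exception...
try:
    parse("oops")
except XMLError as exc:
    specific = str(exc)

# ...while old code that catches RuntimeError keeps working unchanged.
try:
    parse("oops")
except RuntimeError:
    compatible = True
```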

> > what's wrong with "SyntaxError"?
> 
> That would be the wrong exception unless it's parsing Python source
> code.

gotta fix netrc.py then...

</F>




From gstein at lyra.org  Tue Sep 12 23:50:54 2000
From: gstein at lyra.org (Greg Stein)
Date: Tue, 12 Sep 2000 14:50:54 -0700
Subject: [Python-Dev] PyWX (Python AOLserver plugin)
In-Reply-To: <20000912120355.A2457@keymaster.enme.ucalgary.ca>; from nascheme@enme.ucalgary.ca on Tue, Sep 12, 2000 at 12:03:55PM -0600
References: <EDFD2A95EE7DD31187350090279C6767E459CE@THRESHER> <20000912120355.A2457@keymaster.enme.ucalgary.ca>
Message-ID: <20000912145053.B22138@lyra.org>

On Tue, Sep 12, 2000 at 12:03:55PM -0600, Neil Schemenauer wrote:
>...
> On Tue, Sep 12, 2000 at 10:40:36AM -0700, Brent Fulgham wrote:
>...
> > This is largely true, but we run across trouble with the way
> > the individual threads handle 'argv' variables and current
> > working directory.

Are you using Py_NewInterpreter?  If so, then it will use the same argv
across all interpreters that it creates.  Use PyInterpreterState_New
instead; it gives you finer-grained control of what goes into an
interpreter/thread state pair.

> > CGI scripts typically pass data as variables to the script
> > (as argv).  These (unfortunately) are changed globally across
> > all Python interpreter threads, which can cause problems....

They're sharing a list, I believe. See above.

This will definitely be true if you have a single interpreter and multiple
thread states.

> > In addition, the current working directory is not unique
> > among independent Python interpreters.  So if a script changes
> > its directory to something, all other running scripts (in
> > unique python interpreter threads) now have their cwd set to
> > this directory.

As pointed out elsewhere, this is a factor of the OS, not Python. And
Python's design really isn't going to attempt to address this (it really
doesn't make much sense to change these semantics).

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/



From fdrake at beopen.com  Tue Sep 12 23:51:09 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Tue, 12 Sep 2000 17:51:09 -0400 (EDT)
Subject: [Python-Dev] New Python 2.0 documentation packages
Message-ID: <14782.42317.633120.757620@cj42289-a.reston1.va.home.com>

  I've just released a new version of the documentation packages for
the Python 2.0 beta 1 release.  These are versioned 2.0b1.1 and dated
today.  These include a variety of small improvements and additions,
but the big deal is:

    The Module Index is back!

  Pick it up at your friendly Python headquarters:

    http://www.pythonlabs.com/tech/python2.0/


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From brent.fulgham at xpsystems.com  Tue Sep 12 23:55:10 2000
From: brent.fulgham at xpsystems.com (Brent Fulgham)
Date: Tue, 12 Sep 2000 14:55:10 -0700
Subject: [Python-Dev] Re: [PyWX] RE: PyWX (Python AOLserver plugin)
Message-ID: <EDFD2A95EE7DD31187350090279C6767E45B1C@THRESHER>

> There's no easy way to fix the current directory problem.  Just tell
> your CGI programmers that os.chdir() is off-limits; you may remove it
> from the os module (and from the posix module) during initialization
> of your interpreter to enforce this.
>

This is probably a good idea.
 
[ ... snip ... ]

> Are you *sure* you are using PyInterpreterState_New() and not just
> creating new threads?
>
Yes.
 
[ ... snip ... ]

> > Many of the problems we're running into because of this 
> > lack of interpreter isolation are due to the UNIX threading 
> > model, as I see it. 

Titus -- any chance s/UNIX/pthreads/ ?  I.e., would using something
like AOLserver's threading libraries help by providing more
thread-local storage in which to squirrel away various environment
data, dictionaries, etc.?

> > For example, the low-level file operation interference, 
> > cwd problems, and environment variable problems are all caused 
> > by UNIX's determination to share this stuff across threads.  
> > I don't see any way of changing this without causing far
> > more problems than we fix.
> 
> That's the whole point of using threads -- they share as much state as
> they can.  I don't see how you can do better without going to
> processes.  You could perhaps maintain the illusion of a per-thread
> current directory, but you'd have to modify every function that uses
> pathnames to take the simulated pwd into account...
> 

I think we just can't be all things to all people, which is a point
Michael has patiently been making this whole time.  I propose:

1.  We disable os.chdir in PyWX initialization.
2.  We assume "standard" CGI behavior of CGIDIR being a single
directory that all CGIs share.
3.  We address sys.argv (is this just a bug on our part maybe?)
4.  Can we address the os.environ leak similarly?  I'm trying to 
think of cases where a CGI really should be allowed to add to
the environment.  Maybe someone needs to set an environment variable
used by some other program that will be run in a subshell.  If
so, maybe we can somehow serialize activities that modify os.environ
in this way?

Idea:  If Python forks a subshell, it inherits the parent
process's environment.  That's probably the only time we really want
to let someone modify the os.environ -- so it can be passed to
a child.  What if we serialized through the fork somehow like so:

1.  Python script wants to set environment, makes call to os.environ
2.  We serialize here, so now we are single-threaded
3.  Script forks a subshell.
4.  We remove the entry we just added and release the mutex.
5.  Execution continues.

This probably still won't work because the script might now expect
these variables to be in the environment dictionary.

Perhaps we can dummy up a fake os.environ dictionary per interpreter
thread that doesn't actually change the true UNIX environment?

What do you guys think...

Thanks,

-Brent



From cgw at fnal.gov  Tue Sep 12 23:57:51 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Tue, 12 Sep 2000 16:57:51 -0500 (CDT)
Subject: [Python-Dev] Unexpected rexec behavior due to _sre
In-Reply-To: <20000912165610.A554@kronos.cnri.reston.va.us>
References: <14782.31299.800325.803340@buffalo.fnal.gov>
	<200009122148.QAA01404@cj20424-a.reston1.va.home.com>
	<20000912165610.A554@kronos.cnri.reston.va.us>
Message-ID: <14782.42719.159114.708604@buffalo.fnal.gov>

Andrew Kuchling writes:
 > On Tue, Sep 12, 2000 at 04:48:47PM -0500, Guido van Rossum wrote:
 > >The rexec.py module needs to be fixed.  Should be simple enough.
 > >There may be other modules that it should allow too!
 > 
 > Are we sure that it's not possible to engineer segfaults or other
 > nastiness by deliberately feeding _sre bad data?  This was my primary
 > reason for not exposing the PCRE bytecode interface, since it would
 > have been difficult to make the code robust against hostile bytecodes.

If it used to be OK to "import re" in restricted mode, and now it
isn't, then this is an incompatible change and needs to be documented.
There are people running webservers and stuff who are counting on
being able to use the re module in restricted mode.




From brent.fulgham at xpsystems.com  Tue Sep 12 23:58:40 2000
From: brent.fulgham at xpsystems.com (Brent Fulgham)
Date: Tue, 12 Sep 2000 14:58:40 -0700
Subject: [Python-Dev] Re: [PyWX] RE: PyWX (Python AOLserver plugin)
Message-ID: <EDFD2A95EE7DD31187350090279C6767E45B23@THRESHER>

> > Are you *sure* you are using PyInterpreterState_New() and not just
> > creating new threads?
> >
> Yes.
>  
Hold on.  This may be our error.

And I'm taking this traffic off python-dev now.  Thanks for 
all the helpful comments!

Regards,

-Brent



From guido at beopen.com  Wed Sep 13 01:07:40 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 12 Sep 2000 18:07:40 -0500
Subject: [Python-Dev] Re: [PyWX] RE: PyWX (Python AOLserver plugin)
In-Reply-To: Your message of "Tue, 12 Sep 2000 14:55:10 MST."
             <EDFD2A95EE7DD31187350090279C6767E45B1C@THRESHER> 
References: <EDFD2A95EE7DD31187350090279C6767E45B1C@THRESHER> 
Message-ID: <200009122307.SAA02146@cj20424-a.reston1.va.home.com>

> 3.  We address sys.argv (is this just a bug on our part maybe?)

Probably.  The variables are not shared -- their initial values are the
same.

> 4.  Can we address the os.environ leak similarly?  I'm trying to 
> think of cases where a CGI really should be allowed to add to
> the environment.  Maybe someone needs to set an environment variable
> used by some other program that will be run in a subshell.  If
> so, maybe we can somehow serialize activities that modify os.environ
> in this way?

You each get a copy of os.environ.

Running things in subshells from threads is asking for trouble!

But if you have to, you can write your own os.system() substitute that
uses os.execve() -- this allows you to pass in the environment
explicitly.

You may have to take out (override) the code that automatically calls
os.putenv() when an assignment into os.environ is made.
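
A sketch of such a substitute, using os.fork() and os.execve() directly and
assuming a Unix /bin/sh:

```python
import os

def system_with_env(command, env):
    """os.system() substitute that passes an explicit environment dict
    to the child instead of the shared process environment (sketch)."""
    pid = os.fork()
    if pid == 0:
        # Child: replace ourselves with a shell running the command.
        os.execve("/bin/sh", ["/bin/sh", "-c", command], env)
        os._exit(127)          # only reached if execve itself failed
    _, status = os.waitpid(pid, 0)
    return status
```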

> Idea:  If Python forks a subshell, it inherits the parent
> process's environment.  That's probably the only time we really want
> to let someone modify the os.environ -- so it can be passed to
> a child.  What if we serialized through the fork somehow like so:
> 
> 1.  Python script wants to set environment, makes call to os.environ
> 1a. We serialize here, so now we are single-threaded
> 2.  Script forks a subshell.
> 2b. We remove the entry we just added and release mutex.
> 3.  Execution continues.
> 
> This probably still won't work because the script might now expect
> these variables to be in the environment dictionary.
> 
> Perhaps we can dummy up a fake os.environ dictionary per interpreter
> thread that doesn't actually change the true UNIX environment?

See above.  You can do it!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From jcollins at pacificnet.net  Wed Sep 13 02:05:03 2000
From: jcollins at pacificnet.net (jcollins at pacificnet.net)
Date: Tue, 12 Sep 2000 17:05:03 -0700 (PDT)
Subject: [Python-Dev] New Python 2.0 documentation packages
In-Reply-To: <14782.42317.633120.757620@cj42289-a.reston1.va.home.com>
Message-ID: <Pine.LNX.4.21.0009121659550.995-100000@euclid.endtech.com>

Could you also include the .info files?  I have tried unsuccessfully to
build the .info files in the distribution.  Here is the output from make:

<stuff deleted>
make[2]: Leaving directory `/home/collins/Python-2.0b1/Doc/html'
make[1]: Leaving directory `/home/collins/Python-2.0b1/Doc'
../tools/mkinfo ../html/api/api.html
perl -I/home/collins/Python-2.0b1/Doc/tools
/home/collins/Python-2.0b1/Doc/tools/html2texi.pl
/home/collins/Python-2.0b1/Doc/html/api/api.html
<CODE>
  "__all__"
Expected string content of <A> in <DT>: HTML::Element=HASH(0x8241fbc) at
/home/collins/Python-2.0b1/Doc/tools/html2texi.pl line 550.
make: *** [python-api.info] Error 255


Thanks,

Jeff



On Tue, 12 Sep 2000, Fred L. Drake, Jr. wrote:

> 
>   I've just released a new version of the documentation packages for
> the Python 2.0 beta 1 release.  These are versioned 2.0b1.1 and dated
> today.  These include a variety of small improvements and additions,
> but the big deal is:
> 
>     The Module Index is back!
> 
>   Pick it up at your friendly Python headquarters:
> 
>     http://www.pythonlabs.com/tech/python2.0/
> 
> 
>   -Fred
> 
> 




From greg at cosc.canterbury.ac.nz  Wed Sep 13 03:20:06 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed, 13 Sep 2000 13:20:06 +1200 (NZST)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <006e01c01cf0$921a4da0$766940d5@hagrid>
Message-ID: <200009130120.NAA20286@s454.cosc.canterbury.ac.nz>

Fredrik Lundh <effbot at telia.com>:

> "map(None, seq)" uses None to indicate that there are really
> no function to map things through.

This one is just as controversial as print>>None. I would
argue that it *doesn't* mean "no function", because that
doesn't make sense -- there always has to be *some* function.
It really means "use a default function which constructs
a tuple from its arguments".

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From mhagger at alum.mit.edu  Wed Sep 13 07:08:57 2000
From: mhagger at alum.mit.edu (Michael Haggerty)
Date: Wed, 13 Sep 2000 01:08:57 -0400 (EDT)
Subject: [Python-Dev] Re: [PyWX] RE: PyWX (Python AOLserver plugin)
In-Reply-To: <EDFD2A95EE7DD31187350090279C6767E45B1C@THRESHER>
References: <EDFD2A95EE7DD31187350090279C6767E45B1C@THRESHER>
Message-ID: <14783.3049.364561.641240@freak.kaiserty.com>

Brent Fulgham writes:
> Titus -- any chance s/UNIX/pthreads/ ?  I.e., would using something
> like AOLserver's threading libraries help by providing more
> thread-local storage in which to squirrel away various environment
> data, dictionaries, etc.?

The problem isn't a lack of thread-local storage.  The problem is that
*everything* in unix assumes a single environment and a single PWD.
Of course we could emulate a complete unix-like virtual machine within
every thread :-)

> Idea:  If Python forks a subshell, it inherits the parent
> process's environment.  That's probably the only time we really want
> to let someone modify the os.environ -- so it can be passed to
> a child.

Let's set os.environ to a normal dict (i.e., break the connection to
the process's actual environment) initialized to the contents of the
environment.  This fake environment can be passed to a child using
execve.  We would have to override os.system() and its cousins to use
execve with this fake environment.

We only need to figure out:

1. Whether we can just assign a dict to os.environ (and
   posix.environ?) to kill their special behaviors;

2. Whether such changes can be made separately in each interpreter
   without them affecting one another;

3. Whether special measures have to be taken to cause the fake
   environment dictionary to be garbage collected when the interpreter
   is destroyed.
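
In (rough) code, the detached environment might look like this; PYWX_FAKE
is a made-up variable for illustration:

```python
import os

def detach_environ():
    """Replace os.environ with a plain dict copy, severing the magic
    link to the real process environment so that assignments no longer
    reach putenv (sketch; each interpreter would run this during its
    own initialization)."""
    os.environ = dict(os.environ)
    return os.environ

env = detach_environ()
env["PYWX_FAKE"] = "1"     # visible only through the copy
```

A child that genuinely needs the variables would then receive the dict
explicitly, e.g. via os.execve.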

Regarding PWD there's nothing we can realistically do except document
this limitation and clobber os.chdir() as suggested by Guido.

Michael

--
Michael Haggerty
mhagger at alum.mit.edu



From just at letterror.com  Wed Sep 13 10:33:15 2000
From: just at letterror.com (Just van Rossum)
Date: Wed, 13 Sep 2000 09:33:15 +0100
Subject: [Python-Dev] Challenge about print >> None
Message-ID: <l03102802b5e4e70319fa@[193.78.237.174]>

Vladimir Marangozov wrote:
>And let me add to that the following summary: the whole extended
>print idea is about convenience. Convenience for those that know
>what file redirection is. Not for newbies. You can't argue too much
>about extended print as an intuitive concept for newbies.

That's exactly what disturbs me, too. The main reason for the extended
print statement is to make it easier for newbies to solve this problem "ok,
now how do I print to a file other than sys.stdout?". The main flaw in this
reasoning is that a newbie doesn't necessarily realize that when you print
something to the screen it actually goes through a _file_ object, so is
unlikely to ask that question. Or the other way round: someone asking that
question can hardly be considered a newbie. It takes quite a bit of
learning before someone can make the step from "a file is a thing on my
hard drive that stores data" to "a file is an abstract stream object". And
once you've made that step you don't really need extended print statement
that badly anymore.

>The present
>change disturbs experienced users (the >> syntax aside) and you get
>signals about that from them, because the current behavior does not
>comply with any existing concept as far as file redirection is concerned.
>However, since these guys are experienced and knowledgable, they already
>understand this game quite well. So what you get is just "Oh really? OK,
>this is messy" from the chatty ones and everybody moves on.  The others
>just don't care, but they not necessarily agree.

Amen.

Just





From guido at beopen.com  Wed Sep 13 14:57:03 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 13 Sep 2000 07:57:03 -0500
Subject: [Python-Dev] Re: [PyWX] RE: PyWX (Python AOLserver plugin)
In-Reply-To: Your message of "Wed, 13 Sep 2000 01:08:57 -0400."
             <14783.3049.364561.641240@freak.kaiserty.com> 
References: <EDFD2A95EE7DD31187350090279C6767E45B1C@THRESHER>  
            <14783.3049.364561.641240@freak.kaiserty.com> 
Message-ID: <200009131257.HAA04051@cj20424-a.reston1.va.home.com>

> Let's set os.environ to a normal dict (i.e., break the connection to
> the process's actual environment) initialized to the contents of the
> environment.  This fake environment can be passed to a child using
> execve.  We would have to override os.system() and its cousins to use
> execve with this fake environment.
> 
> We only need to figure out:
> 
> 1. Whether we can just assign a dict to os.environ (and
>    posix.environ?) to kill their special behaviors;

You only need to assign to os.environ; posix.environ is not magic.

> 2. Whether such changes can be made separately in each interpreter
>    without them affecting one another;

Yes -- each interpreter (if you use NewInterpreter or whatever) has
its own copy of the os module.

> 3. Whether special measures have to be taken to cause the fake
>    environment dictionary to be garbage collected when the interpreter
>    is destroyed.

No.

> Regarding PWD there's nothing we can realistically do except document
> this limitation and clobber os.chdir() as suggested by Guido.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From gvwilson at nevex.com  Wed Sep 13 14:58:58 2000
From: gvwilson at nevex.com (Greg Wilson)
Date: Wed, 13 Sep 2000 08:58:58 -0400 (EDT)
Subject: [Python-Dev] Academic Paper on Open Source
Message-ID: <Pine.LNX.4.10.10009130854520.2281-100000@akbar.nevex.com>

Yutaka Yamauchi has written an academic paper about Open Source
development methodology based in part on studying the GCC project:

http://www.lab7.kuis.kyoto-u.ac.jp/~yamauchi/papers/yamauchi_cscw2000.pdf

Readers of this list may find it interesting...

Greg
http://www.software-carpentry.com




From jack at oratrix.nl  Wed Sep 13 15:11:07 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Wed, 13 Sep 2000 15:11:07 +0200
Subject: [Python-Dev] Need some hands to debug MacPython installer 
In-Reply-To: Message by Charles G Waldman <cgw@fnal.gov> ,
	     Fri, 8 Sep 2000 18:41:12 -0500 (CDT) , <14777.31000.382351.905418@buffalo.fnal.gov> 
Message-ID: <20000913131108.2F151303181@snelboot.oratrix.nl>

Charles,
sorry, I didn't see your message until now. Could you give me some information 
on the configuration of the mac involved? Ideally the output of "Apple System 
Profiler", which will be in the Apple-menu if you have it. It appears, though, 
that you're running an old MacOS, in which case you may not have it. Then what 
I'd like to know is the machine type, OS version, and amount of memory.

> I am not a Mac user but I saw your posting and my wife has a Mac so I
> decided to give it a try. 
> 
> When I ran the installer, a lot of the text referred to "Python 1.6"
> despite this being a 2.0 installer.
> 
> As the install completed I got a message:  
> 
>  The application "Configure Python" could not be opened because
>  "OTInetClientLib -- OTInetGetSecondaryAddresses" could not be found
> 
> After that, if I try to bring up PythonIDE or PythonInterpreter by
> clicking on the 16-ton icons, I get the same message about
> OTInetGetSecondaryAddresses.  So I'm not able to run Python at all
> right now on this Mac.
> 

--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 





From Vladimir.Marangozov at inrialpes.fr  Wed Sep 13 15:58:53 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Wed, 13 Sep 2000 15:58:53 +0200 (CEST)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <l03102802b5e4e70319fa@[193.78.237.174]> from "Just van Rossum" at Sep 13, 2000 09:33:15 AM
Message-ID: <200009131358.PAA01096@python.inrialpes.fr>

Just van Rossum wrote:
> 
> Amen.
> 

The good thing is that we discussed this relatively early. Like other
minor existing Python features, this one is probably going to die in
a dark corner due to the following conclusions:

1. print >> None generates multiple interpretations. It doesn't really
   matter which one is right or wrong. There is confusion. Face it.

2. For many users, "print >>None makes the '>>None' part disappear"
   is perceived as too magic and inconsistent in the face of general
   public knowledge on redirecting output. Honor that opinion.

3. Any specialization of None is bad. None == sys.stdout is no better
   than None == NullFile. A bug in a user's code may cause None to be
   passed, dumping the output to stdout when it's meant to go into
   a file (say, a web log). This would be hard to catch and once this
   bites you, you'll start adding extra checks to make sure you're not
   passing None. (IOW, the same -1 on NullFile applies to sys.stdout)

A safe recommendation is to back this out and make it raise an exception.
No functionality of _extended_ print is lost.
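The recommended behaviour is easy to emulate in user code; a sketch (checked_write is a made-up name, not part of any proposal):

```python
import io

def checked_write(text, file):
    # Refuse None outright instead of silently falling back to stdout.
    if file is None:
        raise TypeError("file must be a real file object, not None")
    file.write(text + "\n")

buf = io.StringIO()
checked_write("to the log", buf)        # fine: explicit destination

try:
    checked_write("oops", None)         # a bug surfaces here...
    failed = False
except TypeError:
    failed = True                       # ...instead of leaking to stdout
```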

whatever-the-outcome-is,-update-the-PEP'ly y'rs
-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From DavidA at ActiveState.com  Wed Sep 13 18:24:12 2000
From: DavidA at ActiveState.com (David Ascher)
Date: Wed, 13 Sep 2000 09:24:12 -0700 (Pacific Daylight Time)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <200009130120.NAA20286@s454.cosc.canterbury.ac.nz>
Message-ID: <Pine.WNT.4.21.0009130921340.1496-100000@loom>

On Wed, 13 Sep 2000, Greg Ewing wrote:

> Fredrik Lundh <effbot at telia.com>:
> 
> > "map(None, seq)" uses None to indicate that there is really
> > no function to map things through.
> 
> This one is just as controversial as print>>None. I would
> argue that it *doesn't* mean "no function", because that
> doesn't make sense -- there always has to be *some* function.
> It really means "use a default function which constructs
> a tuple from its arguments".

Agreed. To take another example which I also find 'warty', 

	string.split(foo, None, 3)

doesn't mean "use no separators"; it means "use whitespace separators which
can't be defined in a single string".
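Both idioms survive in modern Python, where None still means "use the default": str.split keeps exactly these semantics, and zip() is the usual stand-in for map(None, ...) on equal-length sequences (a sketch in today's syntax):

```python
# David's example: None as separator means "runs of whitespace".
text = "use  whitespace\tseparators"
parts = text.split(None, 1)
# parts == ["use", "whitespace\tseparators"]

# Greg's example: Python 2's map(None, a, b) built tuples from its
# arguments; for equal-length sequences, zip() is the modern spelling.
pairs = list(zip([1, 2], ["x", "y"]))
# pairs == [(1, "x"), (2, "y")]
```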

Thus, FWIW, I'm -1 on the >>None construct.  I'll have a hard time
teaching it, and I'll recommend against using it (unless and until
convinced otherwise, of course).

--david




From titus at caltech.edu  Wed Sep 13 19:09:42 2000
From: titus at caltech.edu (Titus Brown)
Date: Wed, 13 Sep 2000 10:09:42 -0700
Subject: [Python-Dev] Re: [PyWX] RE: PyWX (Python AOLserver plugin)
In-Reply-To: <EDFD2A95EE7DD31187350090279C6767E45B1C@THRESHER>; from brent.fulgham@xpsystems.com on Tue, Sep 12, 2000 at 02:55:10PM -0700
References: <EDFD2A95EE7DD31187350090279C6767E45B1C@THRESHER>
Message-ID: <20000913100942.G10010@cns.caltech.edu>

-> > There's no easy way to fix the current directory problem.  Just tell
-> > your CGI programmers that os.chdir() is off-limits; you may remove it
-> > from the os module (and from the posix module) during initialization
-> > of your interpreter to enforce this.
-> >
-> 
-> This is probably a good idea.

Finally, he says it ;).

-> > Are you *sure* you are using PyInterpreterState_New() and not just
-> > creating new threads?
-> >
-> Yes.

We're using Py_NewInterpreter().  I don't know how much Brent has said
(I'm not on the python-dev mailing list, something I intend to remedy)
but we have two basic types of environment: new interpreter and reused
interpreter.

Everything starts off as a new interpreter, created using Py_NewInterpreter().
At the end of a Web request, a decision is made about "cleaning up" the
interpreter for re-use, vs. destroying it.

Interpreters are cleaned for reuse roughly as follows (using really ugly
C pseudo-code with error checking removed):

---

PyThreadState_Clear(thread_state);
PyDict_Clear(main_module_dict);

// Re-add the builtin module (the dict was just cleared)

bimod = PyImport_ImportModule("__builtin__");
PyDict_SetItemString(main_module_dict, "__builtins__", bimod);
Py_DECREF(bimod);  /* SetItemString keeps its own reference */

---

Some time ago, I decided not to use PyInterpreterState_New() because it
seemed unnecessary; Py_NewInterpreter() did everything we wanted and nothing
more.  Looking at the code for 1.5.2, Py_NewInterpreter():

1) creates a new interpreter state;
2) creates the first thread state for that interpreter;
3) imports the __builtin__ and sys modules, and sets up sys.modules;
4) sets the path;
5) initializes main, as we do above in the reuse part;
6) (optionally) does site initialization.

Since I think we want to do all of that, I don't see any problems.  It seems
like the sys.argv stuff is a problem with PyWX, not with Python inherently.

cheers,
--titus



From skip at mojam.com  Wed Sep 13 19:48:10 2000
From: skip at mojam.com (Skip Montanaro)
Date: Wed, 13 Sep 2000 12:48:10 -0500 (CDT)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <Pine.WNT.4.21.0009130921340.1496-100000@loom>
References: <200009130120.NAA20286@s454.cosc.canterbury.ac.nz>
	<Pine.WNT.4.21.0009130921340.1496-100000@loom>
Message-ID: <14783.48602.639962.38233@beluga.mojam.com>

    David> Thus, FWIW, I'm -1 on the >>None construct.  I'll have a hard
    David> time teaching it, and I'll recommend against using it (unless and
    David> until convinced otherwise, of course).

I've only been following this thread with a few spare neurons.  Even so, I
really don't understand what all the fuss is about.  From the discussions
I've read on this subject, I'm confident the string "print >>None" will
never appear in an actual program.  Instead, it will be used the way Guido
envisioned:

    def write(arg, file=None):
	print >>file, arg

It will never be used in interactive sessions.  You'd just type "print arg"
or "print >>file, arg".  Programmers will never use the name "None" when
putting prints in their code.  They will write "print >>file" where file can
happen to take on the value None.  I doubt new users will even notice it, so
don't bother mentioning it when teaching about the print statement.
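Skip's wrapper still works in today's spelling, since print() treats file=None as sys.stdout; a sketch (the StringIO plumbing is just for demonstration):

```python
import contextlib
import io

def write(arg, file=None):
    # print() falls back to sys.stdout when file is None.
    print(arg, file=file)

log = io.StringIO()
write("to a file", file=log)            # explicit destination

captured = io.StringIO()
with contextlib.redirect_stdout(captured):
    write("to stdout")                  # file stays None -> stdout
```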

I'm sure David teaches people how to use classes without ever mentioning
that they can fiddle a class's __bases__ attribute.  That feature seems much
more subtle and a whole lot more dangerous than "print >> None", yet I hear
no complaints about it.

The __bases__ example occurred to me because I had occasion to use it for
the first time a few days ago.  I don't even know how long the language has
supported it (obviously at least since 1.5.2).  Worked like a charm.
Without it, I would have been stuck making a bunch of subclasses of
cgi.FormContentDict, all because I wanted each of the subclasses I used to
have a __delitem__ method.  What was an "Aha!" followed by about thirty
seconds of typing would have been a whole mess of fiddling without
modifiable __bases__ attributes.  Would I expect the readers of this list to
understand what I did?  In a flash.  Would I mention it to brand new Python
programmers?  Highly unlikely.

It's great to make sure Python is approachable for new users.  I believe we
need to also continue to improve Python's power for more advanced users.  That
doesn't mean turning it into Perl, but it does occasionally mean adding
features to the language that new users won't need in their first class
assignment.

+1 from me.  If Guido likes it, that's cool.

Skip




From gward at python.net  Thu Sep 14 04:53:51 2000
From: gward at python.net (Greg Ward)
Date: Wed, 13 Sep 2000 22:53:51 -0400
Subject: [Python-Dev] Re: packaging Tkinter separately from core Python
In-Reply-To: <200009131247.HAA03938@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Wed, Sep 13, 2000 at 07:47:46AM -0500
References: <14782.59951.901752.674039@bitdiddle.concentric.net> <200009131247.HAA03938@cj20424-a.reston1.va.home.com>
Message-ID: <20000913225351.A862@beelzebub>

On 13 September 2000, Guido van Rossum said:
> Hm.  Would it be easier to have Tkinter.py and friends be part of the
> core distribution, and place only _tkinter and Tcl/Tk in the Tkinter
> RPM?

That seems unnecessarily complex.

> If that's not good, I would recommend installing as a subdir of
> site-packages, with a .pth file pointing to that subdir, e.g.:

And that seems nice.  ;-)

Much easier to get the Distutils to install a .pth file than to do evil
trickery to make it install into, eg., the standard library: just use
the 'extra_path' option.  Eg. in the NumPy setup script
(distutils/examples/numpy_setup.py):

    extra_path = 'Numeric'

means put everything into a directory "Numeric" and create
"Numeric.pth".  If you want different names, you have to make
'extra_path' a tuple:

    extra_path = ('tkinter', 'tkinter-lib')

should get your example setup:

>   site-packages/
>               tkinter.pth		".../site-packages/tkinter-lib"
> 		tkinter-lib/
> 			    _tkinter.so
> 			    Tkinter.py
> 			    Tkconstants.py
> 			    ...etc...

But it's been a while since this stuff was tested.
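The .pth mechanism itself can be exercised directly, independent of the Distutils; a sketch with made-up names (fakemod, tkinter.pth) in a throwaway directory:

```python
import os
import site
import sys
import tempfile

parent = tempfile.mkdtemp()             # stands in for site-packages
libdir = os.path.join(parent, "tkinter-lib")
os.mkdir(libdir)
with open(os.path.join(libdir, "fakemod.py"), "w") as f:
    f.write("VALUE = 'hello'\n")
with open(os.path.join(parent, "tkinter.pth"), "w") as f:
    f.write(libdir + "\n")              # each line names a dir to add

site.addsitedir(parent)                 # processes *.pth, extends sys.path
import fakemod                          # found via tkinter-lib
```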

BTW, is there any good reason to call that directory "tkinter-lib"
instead of "tkinter"?  Is that the preferred convention for directories-
full-of-modules that are not packages?

        Greg
-- 
Greg Ward                                      gward at python.net
http://starship.python.net/~gward/



From martin at loewis.home.cs.tu-berlin.de  Thu Sep 14 08:53:56 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Thu, 14 Sep 2000 08:53:56 +0200
Subject: [Python-Dev] Integer Overflow
Message-ID: <200009140653.IAA01702@loewis.home.cs.tu-berlin.de>

With the current CVS, I get surprising results

Python 2.0b1 (#47, Sep 14 2000, 08:51:18) 
[GCC 2.95.2 19991024 (release)] on linux2
Type "copyright", "credits" or "license" for more information.
>>> 1*1
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
OverflowError: integer multiplication

What is causing this exception?

Curious,
Martin



From tim_one at email.msn.com  Thu Sep 14 09:04:27 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 14 Sep 2000 03:04:27 -0400
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <200009121411.QAA30848@python.inrialpes.fr>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com>

[Tim]
> sometimes-you-just-gotta-trust-your-bdfl-ly y'rs  - tim

[Vladimir Marangozov]
> ...
> I would have preferred arguments. The PEP and your responses lack them
> which is another sign about this feature.

I'll suggest as an alternative that we have an enormous amount of work to
complete for the 2.0 release, and continuing to argue about this isn't
perceived as a reasonable use of limited time.

I've tried it; I like it; anything I say beyond that would just be jerkoff
rationalizing of the conclusion I'm *condemned* to support by my own
pleasant experience with it.  Same with Guido.

We went over it again at a PythonLabs mtg today, and compared to the other
20 things on our agenda, when it popped up we all agreed "eh" after about a
minute.  It has supporters and detractors, the arguments are getting all the
more elaborate, extreme, and repetitive with each iteration, and positions
are clearly frozen already.  That's what a BDFL is for.  He's seen all the
arguments; they haven't changed his mind; and, sorry, but it's a tempest in
a teapot regardless.

how-about-everyone-pitch-in-to-help-clear-the-bug-backlog-instead?-ly
    y'rs  - tim





From tim_one at email.msn.com  Thu Sep 14 09:14:14 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 14 Sep 2000 03:14:14 -0400
Subject: [Python-Dev] Integer Overflow
In-Reply-To: <200009140653.IAA01702@loewis.home.cs.tu-berlin.de>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEPFHFAA.tim_one@email.msn.com>

Works for me (Windows).  Local corruption?  Compiler optimization error?
Config screwup?  Clobber everything and rebuild.  If still a problem, turn
off optimization and try again.  If still a problem, write up what you know
and enter SourceForge bug, marking it platform-specific.

> -----Original Message-----
> From: python-dev-admin at python.org [mailto:python-dev-admin at python.org]On
> Behalf Of Martin v. Loewis
> Sent: Thursday, September 14, 2000 2:54 AM
> To: python-dev at python.org
> Subject: [Python-Dev] Integer Overflow
>
>
> With the current CVS, I get surprising results
>
> Python 2.0b1 (#47, Sep 14 2000, 08:51:18)
> [GCC 2.95.2 19991024 (release)] on linux2
> Type "copyright", "credits" or "license" for more information.
> >>> 1*1
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> OverflowError: integer multiplication
>
> What is causing this exception?
>
> Curious,
> Martin
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://www.python.org/mailman/listinfo/python-dev





From martin at loewis.home.cs.tu-berlin.de  Thu Sep 14 09:32:26 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Thu, 14 Sep 2000 09:32:26 +0200
Subject: [Python-Dev] Integer Overflow
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEPFHFAA.tim_one@email.msn.com>
References: <LNBBLJKPBEHFEDALKOLCGEPFHFAA.tim_one@email.msn.com>
Message-ID: <200009140732.JAA02739@loewis.home.cs.tu-berlin.de>

> Works for me (Windows).  Local corruption?  Compiler optimization error?
> Config screwup?

Config screwup. I simultaneously try glibc betas, and 2.1.93 manages
to define LONG_BIT as 64 (due to testing whether INT_MAX is 2147483647
at a time when INT_MAX is not yet defined). Shifting by LONG_BIT/2 is
then a no-op, so ah=a, bh=b in int_mul. gcc did warn about this, but I
ignored/forgot about the warning.
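The failure mode can be simulated in Python (assuming x86 behaviour, where the hardware masks shift counts to 5 bits, so a shift by 32 becomes a no-op rather than C-level undefined behaviour):

```python
def x86_shift_right(x, n):
    # x86 masks the shift count of a 32-bit shift to n % 32;
    # C simply leaves shifting by >= the width undefined.
    return x >> (n % 32)

LONG_BIT = 64                            # glibc 2.1.93's bogus value
ah = x86_shift_right(1, LONG_BIT // 2)   # int_mul's "high half" of a
# With the correct LONG_BIT of 32, 1 >> 16 == 0 and the overflow check
# passes; with the bogus value, ah == 1, so even 1*1 looks suspicious.
```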

I reported that to the glibc people, and worked around it locally.

Sorry for the confusion,

Martin



From tim_one at email.msn.com  Thu Sep 14 09:44:37 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 14 Sep 2000 03:44:37 -0400
Subject: [Python-Dev] Integer Overflow
In-Reply-To: <200009140732.JAA02739@loewis.home.cs.tu-berlin.de>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEPHHFAA.tim_one@email.msn.com>

Glad you found it!  Note that the result of shifting a 32-bit integer *by*
32 isn't defined in C (gotta love it ...), so "no-op" was lucky.

> -----Original Message-----
> From: Martin v. Loewis [mailto:martin at loewis.home.cs.tu-berlin.de]
> Sent: Thursday, September 14, 2000 3:32 AM
> To: tim_one at email.msn.com
> Cc: python-dev at python.org
> Subject: Re: [Python-Dev] Integer Overflow
>
>
> > Works for me (Windows).  Local corruption?  Compiler optimization error?
> > Config screwup?
>
> Config screwup. I simultaneously try glibc betas, and 2.1.93 manages
> to define LONG_BIT as 64 (due to testing whether INT_MAX is 2147483647
> at a time when INT_MAX is not yet defined). Shifting by LONG_BIT/2 is
> then a no-op, so ah=a, bh=b in int_mul. gcc did warn about this, but I
> ignored/forgot about the warning.
>
> I reported that to the glibc people, and worked around it locally.
>
> Sorry for the confusion,
>
> Martin





From Vladimir.Marangozov at inrialpes.fr  Thu Sep 14 11:40:37 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Thu, 14 Sep 2000 11:40:37 +0200 (CEST)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com> from "Tim Peters" at Sep 14, 2000 03:04:27 AM
Message-ID: <200009140940.LAA02556@python.inrialpes.fr>

Tim Peters wrote:
> 
> I'll suggest as an alternative that we have an enormous amount of work to
> complete for the 2.0 release, and continuing to argue about this isn't
> perceived as a reasonable use of limited time.

Fair enough, but I had no choice: this feature was imposed without prior
discussion and I saw it too late to take a stance. I've done my job.

> 
> I've tried it; I like it; anything I say beyond that would just be jerkoff
> rationalizing of the conclusion I'm *condemned* to support by my own
> pleasant experience with it.  Same with Guido.

Nobody is condemned when receptive. You're inflexibly persistent here.

Remove the feature, discuss it, try providing arguments so that we can
agree (or disagree), write the PEP including a summary of the discussion,
then decide and add the feature.

In this particular case, I find Guido's attitude regarding the "rules of
the game" (that you have fixed, btw, PEPs included) quite unpleasant.

I speak for myself. Guido has invited me here so that I could share
my opinions and experience easily and that's what I'm doing in my spare
cycles (no, your agenda is not mine so I won't look at the bug list).
If you think I'm doing more harm than good, no problem. I'd be happy
to decline his invitation and quit.

I'll be even more explicit:

There are organizational bugs in the functioning of this micro-society
that would need to be fixed first, IMHO. Other signs about this have
been expressed in the past too. Nobody commented. Silence can't rule
forever. Note that I'm not writing arguments for my own pleasure or to
scratch my nose. My time is precious enough, just like yours.

> 
> We went over it again at a PythonLabs mtg today, and compared to the other
> 20 things on our agenda, when it popped up we all agreed "eh" after about a
> minute.  It has supporters and detractors, the arguments are getting all the
> more elaborate, extreme, and repetitive with each iteration, and positions
> are clearly frozen already.  That's what a BDFL is for.  He's seen all the
> arguments; they haven't changed his mind; and, sorry, but it's a tempest in
> a teapot regardless.

Nevermind.

Open your eyes, though.

pre-release-pressure-can-do-more-harm-than-it-should'ly y'rs
-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From gward at mems-exchange.org  Thu Sep 14 15:03:28 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Thu, 14 Sep 2000 09:03:28 -0400
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIEHLHFAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Mon, Sep 11, 2000 at 09:27:10PM -0400
References: <200009112322.BAA29633@python.inrialpes.fr> <LNBBLJKPBEHFEDALKOLCIEHLHFAA.tim_one@email.msn.com>
Message-ID: <20000914090328.A31011@ludwig.cnri.reston.va.us>

On 11 September 2000, Tim Peters said:
> > So as long as one uses extended print, she's already an advanced user.
> 
> Nope!  "Now how did I get this to print to a file instead?" is one of the
> faqiest of newbie FAQs on c.l.py, and the answers they've been given in the
> past were sheer torture for them ("sys?  what's that?  rebind sys.stdout to
> a file-like object?  what?! etc").

But that's only an argument for "print >>file"; it doesn't support
"print >>None" == "print >>sys.stdout" == "print" at all.

The only possible rationale I can see for that equivalence is in a
function that wraps print; it lets you get away with this:

    def my_print (string, file=None):
        print >> file, string

instead of this:

    def my_print (string, file=None):
        if file is None: file = sys.stdout
        print >> file, string

...which is *not* sufficient justification for the tortured syntax *and*
bizarre semantics.  I can live with the tortured ">>" syntax, but
coupled with the bizarre "None == sys.stdout" semantics, this is too
much.

Hmmm.  Reviewing my post, I think someone needs to decide what the
coding standard for ">>" is: "print >>file" or "print >> file"?  ;-)

        Greg



From gward at mems-exchange.org  Thu Sep 14 15:13:27 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Thu, 14 Sep 2000 09:13:27 -0400
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <20000914090328.A31011@ludwig.cnri.reston.va.us>; from gward@ludwig.cnri.reston.va.us on Thu, Sep 14, 2000 at 09:03:28AM -0400
References: <200009112322.BAA29633@python.inrialpes.fr> <LNBBLJKPBEHFEDALKOLCIEHLHFAA.tim_one@email.msn.com> <20000914090328.A31011@ludwig.cnri.reston.va.us>
Message-ID: <20000914091326.B31011@ludwig.cnri.reston.va.us>

Oops.  Forgot to cast my votes:

+1 on redirectable print
-0 on the particular syntax chosen (not that it matters now)
-1 on None == sys.stdout (yes, I know it's more subtle than that,
      but that's just what it looks like)

IMHO "print >>None" should have the same effect as "print >>37" or
"print >>'foo'":

  ValueError: attempt to print to a non-file object

(as opposed to "print to file descriptor 37" and "open a file called
'foo' in append mode and write to it", of course.  ;-)

        Greg



From peter at schneider-kamp.de  Thu Sep 14 15:07:19 2000
From: peter at schneider-kamp.de (Peter Schneider-Kamp)
Date: Thu, 14 Sep 2000 15:07:19 +0200
Subject: [Python-Dev] Re: timeouts  (Was: checking an ip)
References: <SOLv5.8548$l6.467825@zwoll1.home.nl> <39BF9585.FC4C9CB1@schneider-kamp.de> <8po6ei$893$1@sunnews.cern.ch> <013601c01e1f$2f8dde60$978647c1@DEVELOPMENT>
Message-ID: <39C0CD87.396302EC@schneider-kamp.de>

I have proposed the inclusion of Timothy O'Malley's timeoutsocket.py
into the standard socket module on python-dev, but there has not been
a single reply in four weeks.

http://www.python.org/pipermail/python-dev/2000-August/015111.html

I think there are four possibilities:
1) add a timeoutsocket class to Lib/timeoutsocket.py
2) add a timeoutsocket class to Lib/socket.py
3) replace the socket class in Lib/socket.py
4) wait until the interval is down to one day
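For reference, the core of a timeoutsocket-style wrapper (option 1 above) can be sketched as follows (illustrative only; O'Malley's actual module is more thorough):

```python
import select
import socket

class SocketTimeout(Exception):
    pass

class TimeoutSocket:
    """Guard each recv with select() and raise if no data arrives."""
    def __init__(self, sock, timeout):
        self._sock = sock
        self._timeout = timeout

    def recv(self, bufsize):
        ready, _, _ = select.select([self._sock], [], [], self._timeout)
        if not ready:
            raise SocketTimeout("no data within %.2fs" % self._timeout)
        return self._sock.recv(bufsize)

a, b = socket.socketpair()      # a connected local pair for demonstration
wrapped = TimeoutSocket(a, 0.05)
try:
    wrapped.recv(16)            # nothing was sent -> times out
    timed_out = False
except SocketTimeout:
    timed_out = True
b.sendall(b"ping")
data = wrapped.recv(16)         # now data is waiting, returns at once
```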

feedback-hungri-ly y'rs
Peter

Ulf Engström wrote:
> 
> I'm thinking this is something that should be put in the distro, since it
> seems a lot of people are asking for it all the time. I'm using select, but
> it'd be even better to have a proper timeout on all the socket stuff. Not to
> mention timeout on input and raw_input. (using select on those is platform
> dependent). Does anyone have a solution to that?
> Are there any plans to put in timeouts? Can there be? :)
> Regards
> Ulf
> 
> > sigh...
> > and to be more precise, look at yesterday's post labelled
> > nntplib timeout bug?
> > interval between posts asking about timeout for sockets is already
> > down to 2 days.. great :-)
> 
> --
> http://www.python.org/mailman/listinfo/python-list



From garabik at atlas13.dnp.fmph.uniba.sk  Thu Sep 14 16:58:35 2000
From: garabik at atlas13.dnp.fmph.uniba.sk (Radovan Garabik)
Date: Thu, 14 Sep 2000 18:58:35 +0400
Subject: [Python-Dev] Re: [Fwd: Re: timeouts  (Was: checking an ip)]
In-Reply-To: <39C0D268.61F35DE8@schneider-kamp.de>; from peter@schneider-kamp.de on Thu, Sep 14, 2000 at 03:28:08PM +0200
References: <39C0D268.61F35DE8@schneider-kamp.de>
Message-ID: <20000914185835.A4080@melkor.dnp.fmph.uniba.sk>

On Thu, Sep 14, 2000 at 03:28:08PM +0200, Peter Schneider-Kamp wrote:
> 
> I have proposed the inclusion of Timothy O'Malley's timeoutsocket.py
> into the standard socket module on python-dev, but there has not been
> a single reply in four weeks.
> 
> http://www.python.org/pipermail/python-dev/2000-August/015111.html
> 
> I think there are four possibilities:
> 1) add a timeoutsocket class to Lib/timeoutsocket.py

why not, it won't break anything
but timeoutsocket.py needs a bit of "polishing" in this case
and some testing... I had some strange errors on WinNT
with timeout_socket (everything worked flawlessly on linux),
but unfortunately I am now away from that (or any other Winnt) computer 
and cannot do any tests.

> 2) add a timeoutsocket class to Lib/socket.py

possible

> 3) replace the socket class in Lib/socket.py

this could break some applications... especially
if you play with changing blocking/nonblocking status of socket
in them

> 4) wait until the interval is down to one day

5) add timeouts at the C level to socketmodule

this would be probably the right solution, but 
rather difficult to write.


and, of course, both timeout_socket and timeoutsocket
should be looked at rather closely. (I dismantled 
timeout_socket when I was hunting bugs in it, but have not
done it with timeoutsocket)


-- 
 -----------------------------------------------------------
| Radovan Garabik http://melkor.dnp.fmph.uniba.sk/~garabik/ |
| __..--^^^--..__    garabik @ melkor.dnp.fmph.uniba.sk     |
 -----------------------------------------------------------
Antivirus alert: file .signature infected by signature virus.
Hi! I'm a signature virus! Copy me into your signature file to help me spread!



From skip at mojam.com  Thu Sep 14 17:17:03 2000
From: skip at mojam.com (Skip Montanaro)
Date: Thu, 14 Sep 2000 10:17:03 -0500 (CDT)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com>
References: <200009121411.QAA30848@python.inrialpes.fr>
	<LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com>
Message-ID: <14784.60399.893481.717232@beluga.mojam.com>

    Tim> how-about-everyone-pitch-in-to-help-clear-the-bug-backlog-instead?-ly

I find the way python-bugs is working these days extremely bizarre.  Is it
resending a bug when there's some sort of change?  A few I've examined were
originally submitted in 1999.  Are they just now filtering out of jitterbug
or have they had some comment added that I don't see?

Skip




From paul at prescod.net  Thu Sep 14 17:28:14 2000
From: paul at prescod.net (Paul Prescod)
Date: Thu, 14 Sep 2000 08:28:14 -0700
Subject: [Python-Dev] Challenge about print >> None
References: <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com>
Message-ID: <39C0EE8E.770CAA17@prescod.net>

Tim Peters wrote:
> 
>...
> 
> We went over it again at a PythonLabs mtg today, and compared to the other
> 20 things on our agenda, when it popped up we all agreed "eh" after about a
> minute.  It has supporters and detractors, the arguments are getting all the
> more elaborate, extreme, and repetitive with each iteration, and positions
> are clearly frozen already.  That's what a BDFL is for.  He's seen all the
> arguments; they haven't changed his mind; and, sorry, but it's a tempest in
> a teapot regardless.

All of the little hacks and special cases add up.

In the face of all of this confusion the safest thing would be to make
print >> None illegal and then figure it out for Python 2.1. 

-- 
 Paul Prescod - Not encumbered by corporate consensus
Simplicity does not precede complexity, but follows it. 
	- http://www.cs.yale.edu/homes/perlis-alan/quotes.html



From jeremy at beopen.com  Thu Sep 14 17:38:56 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 14 Sep 2000 11:38:56 -0400 (EDT)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <14784.60399.893481.717232@beluga.mojam.com>
References: <200009121411.QAA30848@python.inrialpes.fr>
	<LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com>
	<14784.60399.893481.717232@beluga.mojam.com>
Message-ID: <14784.61712.512770.129447@bitdiddle.concentric.net>

>>>>> "SM" == Skip Montanaro <skip at mojam.com> writes:

  Tim> how-about-everyone-pitch-in-to-help-clear-the-bug-backlog-instead?-ly

  SM> I find the way python-bugs is working these days extremely
  SM> bizarre.  Is it resending a bug when there's some sort of
  SM> change?  A few I've examined were originally submitted in 1999.
  SM> Are they just now filtering out of jitterbug or have they had
  SM> some comment added that I don't see?

Yes.  SF resends the entire bug report for every change to the bug.
If you change the priority from 5 to 4 or do anything else, it sends
mail.  It seems like too much mail to me, but better than no mail at
all.

Also note that the bugs list gets a copy of everything.  The submitter
and current assignee for each bug also get an email.

Jeremy



From jeremy at beopen.com  Thu Sep 14 17:48:50 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 14 Sep 2000 11:48:50 -0400 (EDT)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <200009140940.LAA02556@python.inrialpes.fr>
References: <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com>
	<200009140940.LAA02556@python.inrialpes.fr>
Message-ID: <14784.62306.209688.587211@bitdiddle.concentric.net>

>>>>> "VM" == Vladimir Marangozov <Vladimir.Marangozov at inrialpes.fr> writes:

  VM> Remove the feature, discuss it, try providing arguments so that
  VM> we can agree (or disagree), write the PEP including a summary of
  VM> the discussion, then decide and add the feature.

The last step in the PEP process is for Guido to accept or reject a
PEP.  Since he is one of the primary advocates of the print >>None
behavior, I don't see why we should do what you suggest.  Presumably
Guido will continue to want the feature.

  VM> In this particular case, I find Guido's attitude regarding the
  VM> "rules of the game" (that you have fixed, btw, PEPs included)
  VM> quite unpleasant.

What is Guido's attitude?  What are the "rules of the game"?

  VM> I speak for myself. Guido has invited me here so that I could
  VM> share my opinions and experience easily and that's what I'm
  VM> doing in my spare cycles (no, your agenda is not mine so I won't
  VM> look at the bug list).  If you think I'm doing more harm than
  VM> good, no problem. I'd be happy to decline his invitation and
  VM> quit.

You're a valued member of this community.  We welcome your opinions
and experience.  It appears that in this case, Guido's opinions and
experience lead to a different conclusion than yours.  I am not
thrilled with the print >> None behavior myself, but I do not see the
value of pursuing the issue at length.

  VM> I'll be even more explit:

  VM> There are organizational bugs in the functioning of this
  VM> micro-society that would need to be fixed first, IMHO. Other
  VM> signs about this have been expressed in the past too. Nobody
  VM> commented. Silence can't rule forever. Note that I'm not writing
  VM> arguments for my own pleasure or to scratch my nose. My time is
  VM> precious enough, just like yours.

If I did not comment on early signs of organizational bugs, it was
probably because I did not see them.  We did a lot of hand-wringing
several months ago about the severe backlog in reviewing patches and
bugs.  We're making good progress on both the backlogs.  We also
formalized the design process for major language features.  Our
execution of that process hasn't been flawless (witness the features
in 2.0b1 that are still waiting for their PEPs to be written), but the
PEP process was instituted late in the 2.0 release process.

Jeremy



From effbot at telia.com  Thu Sep 14 18:05:05 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Thu, 14 Sep 2000 18:05:05 +0200
Subject: [Python-Dev] Challenge about print >> None
References: <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com> <39C0EE8E.770CAA17@prescod.net>
Message-ID: <00c201c01e65$8d327bc0$766940d5@hagrid>

Paul wrote:
> In the face of all of this confusion the safest thing would be to make
> print >> None illegal and then figure it out for Python 2.1.

Really?  So what's the next feature we'll have to take out after
some other python-dev member threatens to leave if he cannot
successfully force his ideas onto Guido and everyone else?

</F>

    "I'm really not a very nice person. I can say 'I don't care' with
    a straight face, and really mean it."
    -- Linus Torvalds, on why the B in BDFL really means "bastard"




From paul at prescod.net  Thu Sep 14 18:16:12 2000
From: paul at prescod.net (Paul Prescod)
Date: Thu, 14 Sep 2000 09:16:12 -0700
Subject: [Python-Dev] Challenge about print >> None
References: <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com> <39C0EE8E.770CAA17@prescod.net> <00c201c01e65$8d327bc0$766940d5@hagrid>
Message-ID: <39C0F9CC.C9ECC35E@prescod.net>

Fredrik Lundh wrote:
> 
> Paul wrote:
> > In the face of all of this confusion the safest thing would be to make
> > print >> None illegal and then figure it out for Python 2.1.
> 
> Really?  So what's the next feature we'll have to take out after
> some other python-dev member threatens to leave if he cannot
> successfully force his ideas onto Guido and everyone else?

There have been several participants, all long-time Python users, who
have said that this None thing is weird. Greg Ward, who even likes
*Perl* said it is weird.

By my estimation there are more voices against than for, and those that
are for are typically lukewarm ("I hated it at first but don't hate it
as much anymore"). Therefore I don't see any point in acting as if this
is a single man's crusade.

-- 
 Paul Prescod - Not encumbered by corporate consensus
Simplicity does not precede complexity, but follows it. 
	- http://www.cs.yale.edu/homes/perlis-alan/quotes.html



From akuchlin at mems-exchange.org  Thu Sep 14 18:32:57 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Thu, 14 Sep 2000 12:32:57 -0400
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <39C0F9CC.C9ECC35E@prescod.net>; from paul@prescod.net on Thu, Sep 14, 2000 at 09:16:12AM -0700
References: <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com> <39C0EE8E.770CAA17@prescod.net> <00c201c01e65$8d327bc0$766940d5@hagrid> <39C0F9CC.C9ECC35E@prescod.net>
Message-ID: <20000914123257.C31741@kronos.cnri.reston.va.us>

On Thu, Sep 14, 2000 at 09:16:12AM -0700, Paul Prescod wrote:
>By my estimation there are more voices against than for, and those that
>are for are typically lukewarm ("I hated it at first but don't hate it
>as much anymore"). Therefore I don't see any point in acting as if this
>is a single man's crusade.

Indeed.  On the other hand, this issue is minor enough that it's not
worth walking away from the community over; walk away if you no longer
use Python, or if it's not fun any more, or if the tenor of the
community changes.  Not because of one particular bad feature; GvR's
added bad features before, but we've survived.  

(I should be thankful, really, since the >>None feature means more
material for my Python warts page.)

--amk




From effbot at telia.com  Thu Sep 14 19:07:58 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Thu, 14 Sep 2000 19:07:58 +0200
Subject: [Python-Dev] Challenge about print >> None
References: <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com> <39C0EE8E.770CAA17@prescod.net> <00c201c01e65$8d327bc0$766940d5@hagrid> <39C0F9CC.C9ECC35E@prescod.net>
Message-ID: <003a01c01e6e$56aa2180$766940d5@hagrid>

paul wrote:
> Therefore I don't see any point in acting as if this is a single man's crusade.

really?  who else thinks that this little feature "shows that the rules
are fixed" and "my time is too precious to work on bug fixes" and "we're
here to vote, not to work" and "since my veto doesn't count, there are
organizational bugs". 

can we have a new mailing list, please?  one that's only dealing with
cool code, bug fixes, release administrivia, etc.  practical stuff, not
ego problems.

</F>




From loewis at informatik.hu-berlin.de  Thu Sep 14 19:28:54 2000
From: loewis at informatik.hu-berlin.de (Martin von Loewis)
Date: Thu, 14 Sep 2000 19:28:54 +0200 (MET DST)
Subject: [Python-Dev] Re: [Python-Help] Bug in PyTuple_Resize
In-Reply-To: <200009141413.KAA21765@enkidu.stsci.edu> (delapena@stsci.edu)
References: <200009141413.KAA21765@enkidu.stsci.edu>
Message-ID: <200009141728.TAA04901@pandora.informatik.hu-berlin.de>

> Thank you for the response.  Unfortunately, I do not have the know-how at
> this time to solve this problem!  I did submit my original query and
> your response to the sourceforge bug tracking mechanism this morning.

I spent some time with this bug, and found that it is in some
unrelated code: the tuple resizing mechanism is buggy if cyclic gc
is enabled. A patch is included below. [and in SF patch 101509]

It just happens that this code is rarely used: in _tkinter, when
filtering tuples, and when converting sequences to tuples. And even
then, the bug triggers on most systems only for _tkinter: the tuple
gets smaller in filter, so realloc(3C) returns the same address;
tuple() normally succeeds in knowing the size in advance, so no resize
is necessary.
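For readers following along: the seldom-hit resize path Martin describes is
taken when tuple() is handed a sequence whose reported length turns out to be
wrong. A minimal sketch (the class name is hypothetical, purely illustrative):

```python
# A sequence that over-reports its length: tuple() allocates room for
# 10 items, collects only 3, and must shrink the result in place --
# the internal resize path (_PyTuple_Resize in CPython) described above.
class OverReportingSeq:
    def __len__(self):
        return 10  # deliberately wrong size guess

    def __getitem__(self, i):
        if i < 3:
            return i
        raise IndexError(i)  # iteration stops here

t = tuple(OverReportingSeq())
assert t == (0, 1, 2)
```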

Regards,
Martin

Index: tupleobject.c
===================================================================
RCS file: /cvsroot/python/python/dist/src/Objects/tupleobject.c,v
retrieving revision 2.44
diff -u -r2.44 tupleobject.c
--- tupleobject.c	2000/09/01 23:29:27	2.44
+++ tupleobject.c	2000/09/14 17:12:07
@@ -510,7 +510,7 @@
 		if (g == NULL) {
 			sv = NULL;
 		} else {
-			sv = (PyTupleObject *)PyObject_FROM_GC(g);
+			sv = (PyTupleObject *)PyObject_FROM_GC(sv);
 		}
 #else
 		sv = (PyTupleObject *)



From Vladimir.Marangozov at inrialpes.fr  Thu Sep 14 23:34:24 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Thu, 14 Sep 2000 16:34:24 -0500
Subject: [Python-Dev] See you later, folks!
Message-ID: <200009142134.QAA07143@cj20424-a.reston1.va.home.com>

[Vladimir asked me to post this to the python-dev mailing list, and
to subsequently turn off his subscriptions.  Come back soon, Vladimir!
--Guido]

The time has come for me to leave you for some time. But rest assured,
not for the reasons you suspect <wink>. I'm in the process of changing
jobs & country. Big changes, that is.

So indeed, I'll unsubscribe from the python-dev list for a while and
indeed, I won't look at the bug list because I won't be able to, not
because I don't want to. (I won't be able to handle more patches for
that matter, sorry!)

Regarding the latest debate about extended print, things are surely
not so extreme as they sounded to Fredrik! So take it easy. I can
still sign with both hands what I've said, although you must
know that whenever I engage in the second round of a debate, I have
reasons to do so and my writing style becomes more pathetic, indeed.
But remember that python-dev is a place where educated opinions are being
confronted. The "bug" I referred to is that Guido, as the principal
proponent of a feature, has not entered the second round of this debate
to defend it, despite the challenge I have formulated and subsequently
argued (I understand that he might have felt strange after reading my
posts). I apologize for my style if you feel that I should. I would
quit python-dev in the sense that if there are no more debates, I have
little to no interest in participating. That's what happens when,
for instance, Guido exercises his power prematurely, which is not a
good thing overall.

In short, I suddenly felt like I had to clarify this situation, secretly
knowing that Guido & Tim and everybody else (except Fredrik, but I
forgive him <wink>) understands the many points I've raised. This
debate would be my latest "contribution" for some time.

Last but not least, I must say that I deeply respect Guido & Tim and
everybody else (including Fredrik <wink>) for their knowledge and
positive attitude!  (Tim, I respect your fat ass too <wink> -- he does
a wonderful job on c.l.py!)

See you later!

knowledge-cannot-shrink!-it-can-only-be-extended-and-so-should-be-print'ly
truly-None-forbidding'ly y'rs
--
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From guido at beopen.com  Fri Sep 15 00:15:49 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 14 Sep 2000 17:15:49 -0500
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: Your message of "Thu, 14 Sep 2000 08:28:14 MST."
             <39C0EE8E.770CAA17@prescod.net> 
References: <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com>  
            <39C0EE8E.770CAA17@prescod.net> 
Message-ID: <200009142215.RAA07332@cj20424-a.reston1.va.home.com>

> All of the little hacks and special cases add up.
> 
> In the face of all of this confusion the safest thing would be to make
> print >> None illegal and then figure it out for Python 2.1. 

Sorry, no deal.  print>>file and print>>None are here to stay.

Paul, I don't see why you keep whining about this.  Sure, it's the
feature that everybody loves to hate.  But what's the big deal?  Get
over it.  I don't believe for a second that I've stopped listening.
On the contrary, I've spent a great deal of time reading the
arguments against this feature and its refinement, and I simply fail
to be convinced by the counter-arguments.

If this had been in the language from day one nobody would have
challenged it.  (And I've used my time machine to prove it, so don't
argue. :-)
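(For readers outside the thread: in 2.0, `print >> f, x` writes to `f`, and
`print >> None, x` falls back to sys.stdout. The None-means-stdout rule
carried over into the later print function, so the semantics can be sketched
in today's function form rather than the 2.0 statement form:)

```python
import io
import sys

# Explicit stream: the equivalent of `print >> f, "spam"`.
buf = io.StringIO()
print("spam", file=buf)
assert buf.getvalue() == "spam\n"

# file=None falls back to sys.stdout, mirroring `print >> None, "spam"`.
saved, sys.stdout = sys.stdout, io.StringIO()
print("spam", file=None)
captured, sys.stdout = sys.stdout.getvalue(), saved
assert captured == "spam\n"
```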

If you believe I should no longer be the BDFL, say so, but please keep
it out of python-dev.  We're trying to get work done here.  You're an
employee of a valued member of the Python Consortium.  As such you can
request (through your boss) to be subscribed to the Consortium mailing
list.  Feel free to bring this up there -- there's not much else going
on there.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From jeremy at beopen.com  Thu Sep 14 23:28:33 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 14 Sep 2000 17:28:33 -0400 (EDT)
Subject: [Python-Dev] Revised release schedule
Message-ID: <14785.17153.995000.379187@bitdiddle.concentric.net>

I just updated PEP 200 with some new details about the release
schedule.  These details are still open to some debate, but they need
to be resolved quickly.

I propose that we release 2.0 beta 2 on 26 Sep 2000.  That's one week
from this coming Tuesday.  This would be the final beta.  The final
release would be two weeks after that on 10 Oct 2000.

The feature freeze we imposed before the first beta is still in effect
(more or less).  We should only be adding new features when they fix
crucial bugs.  In order to allow time to prepare the release, all
changes should be made by the end of the day on Sunday, 24 Sep.

There is still a lot of work that remains to resolve open patches and
fix as many bugs as possible.  I have re-opened a number of patches
that were postponed prior to the 2.0b1 release.  It is not clear that
all of these patches should be accepted, but some of them may be
appropriate for inclusion now.  

There is also a large backlog of old bugs and a number of new bugs
from 2.0b1.  Obviously, we need to get these new bugs resolved and
make a dent in the old bugs.  I'll send a note later today with some
guidelines for bug triage.

Jeremy



From guido at beopen.com  Fri Sep 15 00:25:37 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 14 Sep 2000 17:25:37 -0500
Subject: [Python-Dev] Is the 2.0 xml package too immature to release?
In-Reply-To: Your message of "Thu, 14 Sep 2000 08:28:14 MST."
             <39C0EE8E.770CAA17@prescod.net> 
References: <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com>  
            <39C0EE8E.770CAA17@prescod.net> 
Message-ID: <200009142225.RAA07360@cj20424-a.reston1.va.home.com>

> In the face of all of this confusion the safest thing would be to make
> [...] illegal and then figure it out for Python 2.1. 

Taking out controversial features is a good idea in some cases, in
order to prevent likely disasters.

I've heard that the xml support in 2.0b1 is broken, and that it's not
clear that it will be possible to fix it in time (the 2.0b1 release is
due in two weeks).  The best thing here seems to be to remove it and put it
back in 2.1 (due 3-6 months after 2.0).  In the mean time, the XML-sig
can release its own version.

The way I understand the situation right now is that there are two
packages claiming the name xml; one in the 2.0 core and one released
by the XML-sig.  While the original intent was for the XML-sig package
to be a superset of the core package, this doesn't appear to be
currently the case, even if the brokenness of the core xml package can
be fixed.

We absolutely cannot have a situation where there could be two
applications, one working only with the xml-sig's xml package, and the
other only with the 2.0 core xml package.  If at least one direction
of compatibility cannot be guaranteed, I propose that one of the
packages be renamed.  We can either rename the xml package to be
released with Python 2.0 to xmlcore, or we can rename the xml-sig's
xml package to xmlsig (or whatever they like).  (Then when in 2.1 the
issue is resolved, we can rename the compatible solution back to xml.)

Given that the xml-sig already has released packages called xml, the
best solution (and one which doesn't require the cooperation of the
xml-sig!) is to rename the 2.0 core xml package to xmlcore.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From tim_one at email.msn.com  Thu Sep 14 23:28:22 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 14 Sep 2000 17:28:22 -0400
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <200009140940.LAA02556@python.inrialpes.fr>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEBCHGAA.tim_one@email.msn.com>

[Vladimir Marangozov]
> Nobody is condemned when receptive. You're inflexibly persistent here.

I'm terse due to lack of both time for, and interest in, this issue.  I'm
persistent because Guido already ruled on this, has explicitly declined to
change his mind, and that's the way this language has always evolved.  Had
you hung around Python in the early days, there was often *no* discussion
about new features:  they just showed up by surprise.  Since that's how
lambda got in, maybe Guido started Python-Dev to oppose future mistakes like
that <wink>.

> Remove the feature, discuss it, try providing arguments so that we can
> agree (or disagree), write the PEP including a summary of the discussion,
> then decide and add the feature.

It was already very clear that that's what you want.  It should have been
equally clear that it's not what you're going to get on this one.  Take it
up with Guido if you must, but I'm out of it.

> In this particular case, I find Guido's attitude regarding the "rules of
> the game" (that you have fixed, btw, PEPs included) quite unpleasant.
>
> I speak for myself. Guido has invited me here so that I could share
> my opinions and experience easily and that's what I'm doing in my spare
> cycles (no, your agenda is not mine so I won't look at the bug list).

Then understand that my agenda is Guido's, and not only because he's my
boss.  Slashing the bug backlog *now* is something he believes is important
to Python's future, and evidently far more important to him than this
isolated little print gimmick.  It's also my recollection that he started
Python-Dev to get help on decisions that were important to him, not to
endure implacable opposition to every little thing he does.

If he debated every issue brought up on Python-Dev alone to the satisfaction
of just the people here, he would have time for nothing else.  That's the
truth.  As it is, he tells me he spends at least 2 hours every day just
*reading* Python-Dev, and I believe that, because I do too.  So long as this
is a dictatorship, I think it's impossible for people not to feel slighted
at times.  That's the way it's always been, and it's worked very well
despite that.

And I'll tell you something:  there is *nobody* in the history of Python who
has had more suggestions and "killer arguments" rejected by Guido than me.
I got over that in '93, though.  Play with him when you agree, back off when
he says "no".  That's what works.

> If you think I'm doing more harm than good, no problem. I'd be happy
> to decline his invitation and quit.

In general I think Guido believes your presence here is extremely helpful.
I know that I do.  On this particular issue, though, no, continuing to beat
on something after Guido says "case closed" isn't helpful.

> I'll be even more explicit:
>
> There are organizational bugs in the functioning of this micro-society
> that would need to be fixed first, IMHO. Other signs about this have
> been expressed in the past too. Nobody commented.

People have been griping about the way Python is run since '91, so I'm not
buying the idea that this is something new.  The PEP process *is* something
new and has been of very mixed utility so far, but is particularly
handicapped at the start due to the need to record old decisions whose
*real* debates actually ended a long time ago.

I certainly agree that the way this particular gimmick got snuck in violated
"the rules", and if it were anyone other than Guido who did it I'd be
skinning them alive.  I figure he's entitled, though.  Don't you?

> Silence can't rule forever. Note that I'm not writing arguments for
> my own pleasure or to scratch my nose. My time is precious enough, just
> like yours.

Honestly, I don't know why you've taken your time to pursue this repeatedly.
Did Guido say something to suggest that he might change his mind?  I didn't
see it.

> ...
> Open your eyes, though.

I believe they're open, but that we're seeing different visions of how
Python *should* be run.

> pre-release-pressure-can-do-more-harm-than-it-should'ly ly

We've held a strict line on "bugfixes only" since 2.0b1 went out the door,
and I've indeed spent many an hour debating that with the feature-crazed
too.  The debates about all that, and all this, and the license mess, are
sucking my life away.  I still think we're doing a damned good job, though
<wink>.

over-and-out-ly y'rs  - tim





From tim_one at email.msn.com  Thu Sep 14 23:28:25 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 14 Sep 2000 17:28:25 -0400
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <39C0EE8E.770CAA17@prescod.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEBCHGAA.tim_one@email.msn.com>

[Paul Prescod]
> All of the little hacks and special cases add up.

Yes, they add up to a wonderful language <0.9 wink>.

> In the face of all of this confusion the safest thing would be to make
> print >> None illegal and then figure it out for Python 2.1.

There's no confusion in Guido's mind, though.

Well, not on this.  I'll tell you he's *real* confused about xml, though:
we're getting reports that the 2.0b1 version of the xml package is unusably
buggy.  If *that* doesn't get fixed, xml will get tossed out of 2.0final.
Fred Drake has volunteered to see what he can do about that, but it's
unclear whether he can make enough time to pursue it.





From effbot at telia.com  Thu Sep 14 23:46:11 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Thu, 14 Sep 2000 23:46:11 +0200
Subject: [Python-Dev] Re: [Python-Help] Bug in PyTuple_Resize
References: <200009141413.KAA21765@enkidu.stsci.edu> <200009141728.TAA04901@pandora.informatik.hu-berlin.de>
Message-ID: <005201c01e95$3741e680$766940d5@hagrid>

martin wrote:
> I spent some time with this bug, and found that it is in some
> unrelated code: the tuple resizing mechanism is buggy if cyclic gc
> is enabled. A patch is included below. [and in SF patch 101509]

wow, that was quick!

I've assigned the bug back to you.  go ahead and check
it in, and mark the bug as closed.

thanks /F




From akuchlin at mems-exchange.org  Thu Sep 14 23:47:19 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Thu, 14 Sep 2000 17:47:19 -0400
Subject: [Python-Dev] Is the 2.0 xml package too immature to release?
In-Reply-To: <200009142225.RAA07360@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Thu, Sep 14, 2000 at 05:25:37PM -0500
References: <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com> <39C0EE8E.770CAA17@prescod.net> <200009142225.RAA07360@cj20424-a.reston1.va.home.com>
Message-ID: <20000914174719.A29499@kronos.cnri.reston.va.us>

On Thu, Sep 14, 2000 at 05:25:37PM -0500, Guido van Rossum wrote:
>by the XML-sig.  While the original intent was for the XML-sig package
>to be a superset of the core package, this doesn't appear to be
>currently the case, even if the brokenness of the core xml package can
>be fixed.

I'd be more inclined to blame the XML-SIG package; the last public
release is quite elderly, and the CVS tree hasn't been updated to be a
superset of the xml/ package in the Python tree.  However, if you want
to drop the Lib/xml/ package from Python, I have no objections at all;
I never wanted it in the first place.

--amk




From effbot at telia.com  Fri Sep 15 00:16:32 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Fri, 15 Sep 2000 00:16:32 +0200
Subject: [Python-Dev] ...as Python becomes a more popular operating system...
Message-ID: <000701c01e99$d0fac9a0$766940d5@hagrid>

http://www.upside.com/texis/mvm/story?id=39c10a5e0

</F>




From guido at beopen.com  Fri Sep 15 01:14:52 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 14 Sep 2000 18:14:52 -0500
Subject: [Python-Dev] Is the 2.0 xml package too immature to release?
In-Reply-To: Your message of "Thu, 14 Sep 2000 17:47:19 -0400."
             <20000914174719.A29499@kronos.cnri.reston.va.us> 
References: <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com> <39C0EE8E.770CAA17@prescod.net> <200009142225.RAA07360@cj20424-a.reston1.va.home.com>  
            <20000914174719.A29499@kronos.cnri.reston.va.us> 
Message-ID: <200009142314.SAA08092@cj20424-a.reston1.va.home.com>

> On Thu, Sep 14, 2000 at 05:25:37PM -0500, Guido van Rossum wrote:
> >by the XML-sig.  While the original intent was for the XML-sig package
> >to be a superset of the core package, this doesn't appear to be
> >currently the case, even if the brokenness of the core xml package can
> >be fixed.
> 
> I'd be more inclined to blame the XML-SIG package; the last public
> release is quite elderly, and the CVS tree hasn't been updated to be a
> superset of the xml/ package in the Python tree.  However, if you want
> to drop the Lib/xml/ package from Python, I have no objections at all;
> I never wanted it in the first place.

It's easy to blame.  (Aren't you responsible for the XML-SIG releases? :-)

I can't say that I wanted the xml package either -- I thought that the
XML-SIG wanted it, and insisted that it be called 'xml', conflicting
with their own offering.  I'm not part of that group, and have no time
to participate in a discussion there or read their archives.  Somebody
please get their attention -- otherwise it *will* be removed from 2.0!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From jeremy at beopen.com  Fri Sep 15 00:42:00 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 14 Sep 2000 18:42:00 -0400 (EDT)
Subject: [Python-Dev] ...as Python becomes a more popular operating system...
In-Reply-To: <000701c01e99$d0fac9a0$766940d5@hagrid>
References: <000701c01e99$d0fac9a0$766940d5@hagrid>
Message-ID: <14785.21560.61961.86040@bitdiddle.concentric.net>

I like Python plenty, but Emacs is my favorite operating system.

Jeremy



From MarkH at ActiveState.com  Fri Sep 15 00:37:22 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Fri, 15 Sep 2000 09:37:22 +1100
Subject: [Python-Dev] Is the 2.0 xml package too immature to release?
In-Reply-To: <20000914174719.A29499@kronos.cnri.reston.va.us>
Message-ID: <ECEPKNMJLHAPFFJHDOJBMEGHDJAA.MarkH@ActiveState.com>

[Guido]
> On Thu, Sep 14, 2000 at 05:25:37PM -0500, Guido van Rossum wrote:
> >by the XML-sig.  While the original intent was for the XML-sig package
> >to be a superset of the core package, this doesn't appear to be
> >currently the case, even if the brokenness of the core xml package can
> >be fixed.

[Andrew]
> I'd be more inclined to blame the XML-SIG package;

Definitely.  This XML stuff has cost me a number of hours a number of
times!  Always with other people's code, so I didn't know where to turn.

Now we find Guido saying things like:

> > the best solution (and one which doesn't require
> > the cooperation of the xml-sig!) is to rename
> > the 2.0 core xml package to xmlcore.

What is going on here?  We are forced to rename a core package, largely to
avoid the cooperation of, and avoid conflicting with, a SIG explicitly
set up to develop this core package in the first place!!!

How did this happen?  Does the XML SIG need to be shut down (while it still
can <wink>)?

> However, if you want to drop the Lib/xml/ package from
> Python, I have no objections at all; I never wanted it
> in the first place.

Agreed.  It must be dropped if it can not be fixed.  As it stands, an
application can make no assumptions about what xml works.

But IMO, the Python core has first grab at the name "xml" - if we can't get
the cooperation of the SIG, it should be their problem.  Where do we want
to be with respect to XML in a few years?  Surely not with some half-assed
"xmlcore" packge, and some extra "xml" package you still need to get
anything done...

Mark.




From prescod at prescod.net  Fri Sep 15 01:25:38 2000
From: prescod at prescod.net (Paul)
Date: Thu, 14 Sep 2000 18:25:38 -0500 (CDT)
Subject: [Python-Dev] Re: Is the 2.0 xml package too immature to release?
In-Reply-To: <200009142225.RAA07360@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.LNX.4.21.0009141806390.25261-100000@amati.techno.com>

On Thu, 14 Sep 2000, Guido van Rossum wrote:

> > In the face of all of this confusion the safest thing would be to make
> > [...] illegal and then figure it out for Python 2.1. 
> 
> Taking out controversial features is a good idea in some cases, in
> order to prevent likely disasters.
> 
> I've heard that the xml support in 2.0b1 is broken, and that it's not
> clear that it will be possible to fix it in time (the 2.0b1 release is
> due in two weeks).  The best thing here seems to remove it and put it
> back in 2.1 (due 3-6 months after 2.0).  In the mean time, the XML-sig
> can release its own version.

I've been productively using the 2.0 XML package. There are three main
modules in there: Expat -- which I believe is fine, SAX -- which is not
finished, and minidom -- which has a couple of very minor known bugs
relating to standards conformance.

If you are asking whether SAX can be fixed in time then the answer is "I
think so but it is out of my hands."  I contributed fixes to SAX this
morning and the remaining known issues are design issues. I'm not the
designer. If I were the designer I'd call it done, make a test suite and
go home.

Whether or not it is finished, I see no reason to hold up either minidom
or expat. There have been very few complaints about either.
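The two modules Paul calls stable kept essentially that shape in the core
package; a minimal sketch of the minidom side of the API as described above:

```python
from xml.dom.minidom import parseString

# Parse a small document and walk the tree with the DOM API.
doc = parseString("<menu><item>spam</item><item>eggs</item></menu>")
items = doc.getElementsByTagName("item")
texts = [node.firstChild.data for node in items]
assert texts == ["spam", "eggs"]
```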

> The way I understand the situation right now is that there are two
> packages claiming the name xml; one in the 2.0 core and one released
> by the XML-sig.  While the original intent was for the XML-sig package
> to be a superset of the core package, this doesn't appear to be
> currently the case, even if the brokenness of the core xml package can
> be fixed.

That's true. Martin V. Loewis has promised to look into this situation for
us. I believe he has a good understanding of the issues.

> We absolutely cannot have a situation where there could be two
> applications, one working only with the xml-sig's xml package, and the
> other only with the 2.0 core xml package.  If at least one direction
> of compatibility cannot be guaranteed, I propose that one of the
> packages be renamed.  We can either rename the xml package to be
> released with Python 2.0 to xmlcore, or we can rename the xml-sig's
> xml package to xmlsig (or whatever they like).  (Then when in 2.1 the
> issue is resolved, we can rename the compatible solution back to xml.)
> 
> Given that the xml-sig already has released packages called xml, the
> best solution (and one which doesn't require the cooperation of the
> xml-sig!) is to rename the 2.0 core xml package to xmlcore.

I think it would be unfortunate if the Python xml processing package were
named xmlcore for eternity. The whole point of putting it in the core is
that it should become more popular and ubiquitous than an add-on module.

I'd rather see Martin given an opportunity to look into it. If he hasn't
made progress in a week then we can rename one or the other.

 Paul





From prescod at prescod.net  Fri Sep 15 01:53:15 2000
From: prescod at prescod.net (Paul)
Date: Thu, 14 Sep 2000 18:53:15 -0500 (CDT)
Subject: [Python-Dev] Is the 2.0 xml package too immature to release?
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBMEGHDJAA.MarkH@ActiveState.com>
Message-ID: <Pine.LNX.4.21.0009141829330.25261-100000@amati.techno.com>

On Fri, 15 Sep 2000, Mark Hammond wrote:

> [Andrew]
> > I'd be more inclined to blame the XML-SIG package;
> 
> Definitely.  This XML stuff has cost me a number of hours a number of
> times!  Always with other people's code, so I didn't know where to turn.

The XML SIG package is unstable. It's a grab bag. It's the cool stuff
people have been working on. I've said about a hundred times that it will
never get to version 1, will never be stable, will never be reliable
because that isn't how anyone views it. I don't see it as a flaw: it's the
place you go for cutting edge XML stuff. That's why Andrew and Guido are
dead wrong that we don't need an xml package in the core. That's
where the stable stuff goes. Expat and Minidom are stable. IIRC, their
APIs have only changed in minor ways in the last year.

> What is going on here?  We are forced to rename a core package, largely to
> avoid the cooperation of, and avoid conflicting with, a SIG explicitly
> setup to develop this core package in the first place!!!
> 
> How did this happen?  Does the XML SIG need to be shut down (while it still
> can <wink>)?

It's not that anybody is not cooperating. It's that there are a small
number of people doing the actual work and they drop in and out of
availability based on their real life jobs. It isn't always, er, polite to
tell someone "get out of the way I'll do it myself." Despite the fact that
all the nasty hints are being dropped in my direction, nobody exercises a
BDFL position in the XML SIG. There's the central issue. Nobody imposes
deadlines, nobody says what features should go in or shouldn't and in what
form. If I tried to do so I would be rightfully slapped down.

> But IMO, the Python core has first grab at the name "xml" - if we can't get
> the cooperation of the SIG, it should be their problem.  Where do we want
> to be with respect to XML in a few years?  Surely not with some half-assed
> "xmlcore" package, and some extra "xml" package you still need to get
> anything done...

It's easy to say that the core is important and the sig package is
secondary but 

 a) Guido says that they are both important
 b) The sig package has some users (at least a few) with running code

Nevertheless, I agree with you that in the long term we will wish we had
just used the name "xml" for the core package. I'm just pointing out that
it isn't as simple as it looks when you aren't involved.

 Paul Prescod




From prescod at prescod.net  Fri Sep 15 02:12:28 2000
From: prescod at prescod.net (Paul)
Date: Thu, 14 Sep 2000 19:12:28 -0500 (CDT)
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: <200009142215.RAA07332@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.LNX.4.21.0009141910140.25261-100000@amati.techno.com>

On Thu, 14 Sep 2000, Guido van Rossum wrote:
> ...
>
> Paul, I don't see why you keep whining about this. ...
> ...
> 
> If this had been in the language from day one nobody would have
> challenged it.  (And I've used my time machine to prove it, so don't
> argue. :-)

Well I still dislike "print" and map( None, ...) but yes, the societal bar
is much higher for change than for status quo. That's how the world works.

> If you believe I should no longer be the BDFL, say so, but please keep
> it out of python-dev.  We're trying to get work done here.  You're an
> employee of a valued member of the Python Consortium.  As such you can
> request (through your boss) to be subscribed to the Consortium mailing
> list.  Feel free to bring this up there -- there's not much else going
> on there.

What message are you replying to?

According to the archives, I've sent four messages since the beginning of
September. None of them suggest you are doing a bad job as BDFL (other
than being wrong on this particular issue).

 Paul Prescod





From trentm at ActiveState.com  Fri Sep 15 02:20:45 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Thu, 14 Sep 2000 17:20:45 -0700
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src configure.in,1.156,1.157 configure,1.146,1.147 config.h.in,2.72,2.73
In-Reply-To: <200009141547.IAA14881@slayer.i.sourceforge.net>; from fdrake@users.sourceforge.net on Thu, Sep 14, 2000 at 08:47:10AM -0700
References: <200009141547.IAA14881@slayer.i.sourceforge.net>
Message-ID: <20000914172045.E3038@ActiveState.com>

On Thu, Sep 14, 2000 at 08:47:10AM -0700, Fred L. Drake wrote:
> Update of /cvsroot/python/python/dist/src
> In directory slayer.i.sourceforge.net:/tmp/cvs-serv14790
> 
> Modified Files:
> 	configure.in configure config.h.in 
> Log Message:
> 
> Allow configure to detect whether ndbm.h or gdbm/ndbm.h is installed.
> This allows dbmmodule.c to use either without having to add additional
> options to the Modules/Setup file or make source changes.
> 
> (At least some Linux systems use gdbm to emulate ndbm, but only install
> the ndbm.h header as /usr/include/gdbm/ndbm.h.)
>
> Index: configure.in
> ===================================================================
> RCS file: /cvsroot/python/python/dist/src/configure.in,v
> retrieving revision 1.156
> retrieving revision 1.157
> diff -C2 -r1.156 -r1.157
> *** configure.in	2000/09/08 02:17:14	1.156
> --- configure.in	2000/09/14 15:47:04	1.157
> ***************
> *** 372,376 ****
>   sys/audioio.h sys/file.h sys/lock.h db_185.h db.h \
>   sys/param.h sys/select.h sys/socket.h sys/time.h sys/times.h \
> ! sys/un.h sys/utsname.h sys/wait.h pty.h libutil.h)
>   AC_HEADER_DIRENT
>   
> --- 372,376 ----
>   sys/audioio.h sys/file.h sys/lock.h db_185.h db.h \
>   sys/param.h sys/select.h sys/socket.h sys/time.h sys/times.h \
> ! sys/un.h sys/utsname.h sys/wait.h pty.h libutil.h ndbm.h gdbm/ndbm.h)
>   AC_HEADER_DIRENT

Is this the correct fix? Previously I had been compiling the dbmmodule on
Debian and Red Hat boxes using /usr/include/db1/ndbm.h (I had to change the
Setup.in line to include this directory). Now the configure test says that
ndbm.h does not exist, and this patch (see below) to dbmmodule.c won't
compile.



> Index: dbmmodule.c
> ===================================================================
> RCS file: /cvsroot/python/python/dist/src/Modules/dbmmodule.c,v
> retrieving revision 2.22
> retrieving revision 2.23
> diff -C2 -r2.22 -r2.23
> *** dbmmodule.c   2000/09/01 23:29:26 2.22
> --- dbmmodule.c   2000/09/14 15:48:06 2.23
> ***************
> *** 8,12 ****
> --- 8,22 ----
>   #include <sys/stat.h>
>   #include <fcntl.h>
> +
> + /* Some Linux systems install gdbm/ndbm.h, but not ndbm.h.  This supports
> +  * whichever configure was able to locate.
> +  */
> + #if defined(HAVE_NDBM_H)
>   #include <ndbm.h>
> + #elif defined(HAVE_GDBM_NDBM_H)
> + #include <gdbm/ndbm.h>
> + #else
> + #error "No ndbm.h available!"
> + #endif
>
>   typedef struct {


-- 
Trent Mick
TrentM at ActiveState.com



From akuchlin at cnri.reston.va.us  Fri Sep 15 04:05:40 2000
From: akuchlin at cnri.reston.va.us (Andrew Kuchling)
Date: Thu, 14 Sep 2000 22:05:40 -0400
Subject: [Python-Dev] Is the 2.0 xml package too immature to release?
In-Reply-To: <200009142314.SAA08092@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Thu, Sep 14, 2000 at 06:14:52PM -0500
References: <LNBBLJKPBEHFEDALKOLCMEPEHFAA.tim_one@email.msn.com> <39C0EE8E.770CAA17@prescod.net> <200009142225.RAA07360@cj20424-a.reston1.va.home.com> <20000914174719.A29499@kronos.cnri.reston.va.us> <200009142314.SAA08092@cj20424-a.reston1.va.home.com>
Message-ID: <20000914220540.A26196@newcnri.cnri.reston.va.us>

On Thu, Sep 14, 2000 at 06:14:52PM -0500, Guido van Rossum wrote:
>It's easy to blame.  (Aren't you responsible for the XML-SIG releases? :-)

Correct; I wouldn't presume to flagellate someone else.

>I can't say that I wanted the xml package either -- I thought that the
>XML-SIG wanted it, and insisted that it be called 'xml', conflicting
>with their own offering.  I'm not part of that group, and have no time

Most of the XML-SIG does want it; I'm just not one of them.

--amk



From petrilli at amber.org  Fri Sep 15 04:29:35 2000
From: petrilli at amber.org (Christopher Petrilli)
Date: Thu, 14 Sep 2000 22:29:35 -0400
Subject: [Python-Dev] ...as Python becomes a more popular operating system...
In-Reply-To: <14785.21560.61961.86040@bitdiddle.concentric.net>; from jeremy@beopen.com on Thu, Sep 14, 2000 at 06:42:00PM -0400
References: <000701c01e99$d0fac9a0$766940d5@hagrid> <14785.21560.61961.86040@bitdiddle.concentric.net>
Message-ID: <20000914222935.A16149@trump.amber.org>

Jeremy Hylton [jeremy at beopen.com] wrote:
> I like Python plenty, but Emacs is my favorite operating system.

M-% operating system RET religion RET !

:-)
Chris
-- 
| Christopher Petrilli
| petrilli at amber.org



From moshez at math.huji.ac.il  Fri Sep 15 13:06:44 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Fri, 15 Sep 2000 14:06:44 +0300 (IDT)
Subject: [Python-Dev] Vacation
Message-ID: <Pine.GSO.4.10.10009151403560.23713-100000@sundial>

I'm going to be away from my e-mail from the 16th to the 23rd as I'm going
to be vacationing in the Netherlands. Please do not count on me to do
anything that needs to be done until the 24th. I currently have two
patches assigned to me which should be considered before b2, so if b2 is
before the 24th, please assign them to someone else.

Thanks in advance.
--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From guido at beopen.com  Fri Sep 15 14:40:52 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 15 Sep 2000 07:40:52 -0500
Subject: [Python-Dev] Re: Is the 2.0 xml package too immature to release?
In-Reply-To: Your message of "Thu, 14 Sep 2000 18:25:38 EST."
             <Pine.LNX.4.21.0009141806390.25261-100000@amati.techno.com> 
References: <Pine.LNX.4.21.0009141806390.25261-100000@amati.techno.com> 
Message-ID: <200009151240.HAA09833@cj20424-a.reston1.va.home.com>

[me]
> > Given that the xml-sig already has released packages called xml, the
> > best solution (and one which doesn't require the cooperation of the
> > xml-sig!) is to rename the 2.0 core xml package to xmlcore.
> 
> I think it would be unfortunate if the Python xml processing package were
> named xmlcore for eternity. The whole point of putting it in the core is
> that it should become more popular and ubiquitous than an add-on module.

I'm not proposing that it be called xmlcore for eternity, but I see a
*practical* problem with the 2.0 release: the xml-sig has a package
called 'xml' (and they've had dibs on the name for years!) which is
incompatible.  We can't force them to issue a new release under a
different name.  I don't want to break other people's code that
requires the xml-sig's xml package.

I propose the following:

We remove the '_xmlplus' feature.  It seems better not to rely on the
xml-sig to provide upgrades to the core xml package.  We're planning
2.1, 2.2, ... releases 3-6 months apart which should be quick enough
for most upgrade needs; we can issue service packs in between if
necessary.

*IF* (and that's still a big "if"!) the xml core support is stable
before Sept. 26, we'll keep it under the name 'xmlcore'.  If it's not
stable, we remove it, but we'll consider it for 2.1.

In 2.1, presuming the XML-sig has released its own package under a
different name, we'll rename 'xmlcore' to 'xml' (keeping 'xmlcore' as
a backwards compatibility feature until 2.2).

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Fri Sep 15 14:46:30 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 15 Sep 2000 07:46:30 -0500
Subject: [Python-Dev] Challenge about print >> None
In-Reply-To: Your message of "Thu, 14 Sep 2000 19:12:28 EST."
             <Pine.LNX.4.21.0009141910140.25261-100000@amati.techno.com> 
References: <Pine.LNX.4.21.0009141910140.25261-100000@amati.techno.com> 
Message-ID: <200009151246.HAA09902@cj20424-a.reston1.va.home.com>

> Well I still dislike "print" and map( None, ...) but yes, the societal bar
> is much higher for change than for status quo. That's how the world works.

Thanks.  You're getting over it just fine.  Don't worry!

> > If you believe I should no longer be the BDFL, say so, but please keep
> > it out of python-dev.  We're trying to get work done here.  You're an
> > employee of a valued member of the Python Consortium.  As such you can
> > request (through your boss) to be subscribed to the Consortium mailing
> > list.  Feel free to bring this up there -- there's not much else going
> > on there.
> 
> What message are you replying to?
> 
> According to the archives, I've sent four messages since the beginning of
> September. None of them suggest you are doing a bad job as BDFL (other
> than being wrong on this particular issue).

My apologies.  It must have been Vladimir's.  I was on the phone and
in meetings for most of the day and saw a whole slew of messages about
this issue.  Let's put this to rest -- I still have 50 more messages
to skim.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From thomas.heller at ion-tof.com  Fri Sep 15 17:05:22 2000
From: thomas.heller at ion-tof.com (Thomas Heller)
Date: Fri, 15 Sep 2000 17:05:22 +0200
Subject: [Python-Dev] Bug in 1.6 and 2.0b1 re?
Message-ID: <032a01c01f26$624a7900$4500a8c0@thomasnb>

[I posted this to the distutils mailing list, but have not yet
received an answer]

> This may not be directly related to distutils,
> it may also be a bug in the 1.6 and 2.0b1 re implementation.
> 
> 'setup.py sdist' with the current distutils CVS version
> hangs while parsing MANIFEST.in,
> executing the re.sub command in these lines in text_file.py:
> 
>         # collapse internal whitespace (*after* joining lines!)
>         if self.collapse_ws:
>             line = re.sub (r'(\S)\s+(\S)', r'\1 \2', line)
> 
> 
> Has anyone else noticed this, or is something wrong on my side?
> 

[And a similar problem has been posted to c.l.p by vio]

> I believe there may be a RE bug in 2.0b1. Consider the following script:
> 
> #!/usr/bin/env python
> import re
> s = "red green blue"
> m = re.compile(r'green (\w+)', re.IGNORECASE)
> t = re.subn(m, r'matchedword \1 blah', s)
> print t
> 
> 
> When I run this on 1.5.2, I get the following expected output:
> 
> ('red matchedword blue blah', 1)
> 
> 
> If I run it on 2.0b1, python basically hangs.
> 

Thomas
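
For reference, vio's report boils down to a few lines that can be checked
directly; on a working re implementation (e.g. 1.5.2, or any later fixed
version) this self-contained version of the reported script completes
immediately with the expected result:

```python
import re

# vio's reported case: IGNORECASE pattern plus a backreference replacement
s = "red green blue"
pattern = re.compile(r'green (\w+)', re.IGNORECASE)
result = re.subn(pattern, r'matchedword \1 blah', s)
print(result)  # expected: ('red matchedword blue blah', 1)
```

If this hangs instead of printing, the interpreter has the buggy re engine.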




From guido at beopen.com  Fri Sep 15 18:24:47 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 15 Sep 2000 11:24:47 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib pickle.py,1.38,1.39
In-Reply-To: Your message of "Fri, 15 Sep 2000 08:14:54 MST."
             <200009151514.IAA26707@slayer.i.sourceforge.net> 
References: <200009151514.IAA26707@slayer.i.sourceforge.net> 
Message-ID: <200009151624.LAA10888@cj20424-a.reston1.va.home.com>

> --- 578,624 ----
>   
>       def load_string(self):
> !         rep = self.readline()[:-1]
> !         if not self._is_string_secure(rep):
> !             raise ValueError, "insecure string pickle"
> !         self.append(eval(rep,
>                            {'__builtins__': {}})) # Let's be careful
>       dispatch[STRING] = load_string
> + 
> +     def _is_string_secure(self, s):
> +         """Return true if s contains a string that is safe to eval
> + 
> +         The definition of secure string is based on the implementation
> +         in cPickle.  s is secure as long as it only contains a quoted
> +         string and optional trailing whitespace.
> +         """
> +         q = s[0]
> +         if q not in ("'", '"'):
> +             return 0
> +         # find the closing quote
> +         offset = 1
> +         i = None
> +         while 1:
> +             try:
> +                 i = s.index(q, offset)
> +             except ValueError:
> +                 # if there is an error the first time, there is no
> +                 # close quote
> +                 if offset == 1:
> +                     return 0
> +             if s[i-1] != '\\':
> +                 break
> +             # check to see if this one is escaped
> +             nslash = 0
> +             j = i - 1
> +             while j >= offset and s[j] == '\\':
> +                 j = j - 1
> +                 nslash = nslash + 1
> +             if nslash % 2 == 0:
> +                 break
> +             offset = i + 1
> +         for c in s[i+1:]:
> +             if ord(c) > 32:
> +                 return 0
> +         return 1
>   
>       def load_binstring(self):

Hm...  This seems to add a lot of work to a very common item in
pickles.

I had a different idea on how to make this safe from abuse: pass eval
a globals dict with an empty __builtins__ dict, as follows:
{'__builtins__': {}}.

Have you timed it?

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)
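
For readers following along, the quoted check can be exercised standalone.
Below is a close Python adaptation of the diff; the one deliberate change is
that a missing closing quote is treated as insecure on every loop iteration,
not just the first (the quoted version falls through with a stale index in
that case):

```python
def is_string_secure(s):
    """Return 1 if s is a single quoted string literal plus optional
    trailing whitespace (adapted from the quoted pickle.py diff)."""
    q = s[0]
    if q not in ("'", '"'):
        return 0
    offset = 1
    while 1:
        try:
            i = s.index(q, offset)
        except ValueError:
            return 0  # no unescaped closing quote anywhere
        if s[i-1] != '\\':
            break
        # count the run of backslashes; an even run means the
        # quote is not actually escaped
        nslash = 0
        j = i - 1
        while j >= offset and s[j] == '\\':
            j = j - 1
            nslash = nslash + 1
        if nslash % 2 == 0:
            break
        offset = i + 1
    for c in s[i+1:]:
        if ord(c) > 32:
            return 0  # non-whitespace after the closing quote
    return 1

print(is_string_secure("'hello world'"))          # 1
print(is_string_secure("'hello world'*2000000"))  # 0
```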



From guido at beopen.com  Fri Sep 15 18:29:40 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 15 Sep 2000 11:29:40 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib pickle.py,1.38,1.39
In-Reply-To: Your message of "Fri, 15 Sep 2000 11:24:47 EST."
             <200009151624.LAA10888@cj20424-a.reston1.va.home.com> 
References: <200009151514.IAA26707@slayer.i.sourceforge.net>  
            <200009151624.LAA10888@cj20424-a.reston1.va.home.com> 
Message-ID: <200009151629.LAA10956@cj20424-a.reston1.va.home.com>

[I wrote]
> Hm...  This seems to add a lot of work to a very common item in
> pickles.
> 
> I had a different idea on how to make this safe from abuse: pass eval
> a globals dict with an empty __builtins__ dict, as follows:
> {'__builtins__': {}}.

I forgot that this is already how it's done.  But my point remains:
who says that this can cause security violations?  Sure, it can cause
unpickling to fail with an exception -- so can tons of other invalid
pickles.  But is it a security violation?

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From trentm at ActiveState.com  Fri Sep 15 17:30:28 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Fri, 15 Sep 2000 08:30:28 -0700
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules structmodule.c,2.38,2.39
In-Reply-To: <200009150732.AAA08842@slayer.i.sourceforge.net>; from loewis@users.sourceforge.net on Fri, Sep 15, 2000 at 12:32:01AM -0700
References: <200009150732.AAA08842@slayer.i.sourceforge.net>
Message-ID: <20000915083028.D30529@ActiveState.com>

On Fri, Sep 15, 2000 at 12:32:01AM -0700, Martin v. L?wis wrote:
> Modified Files:
> 	structmodule.c 
> Log Message:
> Check range for bytes and shorts. Closes bug #110845.
> 
> 
> + 	if (x < -32768 || x > 32767){
> + 		PyErr_SetString(StructError,
> + 				"short format requires -32768<=number<=32767");
> + 		return -1;
> + 	}

Would it not be cleaner to use SHRT_MIN and SHRT_MAX (from limits.h I think)
here?

> + 	if (x < 0 || x > 65535){
> + 		PyErr_SetString(StructError,
> + 				"short format requires 0<=number<=65535");
> + 		return -1;
> + 	}
> + 	* (unsigned short *)p = (unsigned short)x;

And USHRT_MIN and USHRT_MAX here?


No biggie though.

Trent

-- 
Trent Mick
TrentM at ActiveState.com
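
As an aside for later readers: the committed range check is observable from
the Python side, since struct.pack now rejects out-of-range shorts (a quick
sketch; the exact error message text has varied across versions):

```python
import struct

# a value outside the signed 16-bit range is rejected outright
try:
    struct.pack('h', 40000)
except struct.error as err:
    print("rejected:", err)

# the boundary values still pack and round-trip fine
print(struct.unpack('h', struct.pack('h', 32767)))   # (32767,)
print(struct.unpack('H', struct.pack('H', 65535)))   # (65535,)
```

Note that limits.h defines SHRT_MIN, SHRT_MAX, and USHRT_MAX, but there is
no USHRT_MIN: the unsigned minimum is simply 0.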



From trentm at ActiveState.com  Fri Sep 15 17:35:19 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Fri, 15 Sep 2000 08:35:19 -0700
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules structmodule.c,2.38,2.39
In-Reply-To: <20000915083028.D30529@ActiveState.com>; from trentm@ActiveState.com on Fri, Sep 15, 2000 at 08:30:28AM -0700
References: <200009150732.AAA08842@slayer.i.sourceforge.net> <20000915083028.D30529@ActiveState.com>
Message-ID: <20000915083519.E30529@ActiveState.com>

On Fri, Sep 15, 2000 at 08:30:28AM -0700, Trent Mick wrote:
> On Fri, Sep 15, 2000 at 12:32:01AM -0700, Martin v. L?wis wrote:
> > Modified Files:
> > 	structmodule.c 
> > Log Message:
> > Check range for bytes and shorts. Closes bug #110845.
> > 
> > 
> > + 	if (x < -32768 || x > 32767){
> > + 		PyErr_SetString(StructError,
> > + 				"short format requires -32768<=number<=32767");
> > + 		return -1;
> > + 	}
> 
> Would it not be cleaner to use SHRT_MIN and SHRT_MAX (from limits.h I think)
> here?
> 
> > + 	if (x < 0 || x > 65535){
> > + 		PyErr_SetString(StructError,
> > + 				"short format requires 0<=number<=65535");
> > + 		return -1;
> > + 	}
> > + 	* (unsigned short *)p = (unsigned short)x;
> 
> And USHRT_MIN and USHRT_MAX here?
> 


Heh, heh. I jumped a bit quickly on that one. Three checkin messages later
this suggestion was applied. :) Sorry about that, Martin.


Trent

-- 
Trent Mick
TrentM at ActiveState.com



From paul at prescod.net  Fri Sep 15 18:02:40 2000
From: paul at prescod.net (Paul Prescod)
Date: Fri, 15 Sep 2000 09:02:40 -0700
Subject: [Python-Dev] Re: Is the 2.0 xml package too immature to release?
References: <Pine.LNX.4.21.0009141806390.25261-100000@amati.techno.com> <200009151240.HAA09833@cj20424-a.reston1.va.home.com>
Message-ID: <39C24820.FB951E80@prescod.net>

Guido van Rossum wrote:
> 
> ...
> 
> I'm not proposing that it be called xmlcore for eternity, but I see a
> *practical* problem with the 2.0 release: the xml-sig has a package
> called 'xml' (and they've had dibs on the name for years!) which is
> incompatible.  We can't force them to issue a new release under a
> different name.  I don't want to break other people's code that
> requires the xml-sig's xml package.

Martin v. Loewis, Greg Stein and others think that they have a
backwards-compatible solution. You can decide whether to let Martin try
versus go the "xmlcore" route, or else you could delegate that decision
(to someone in particular, please!).

> I propose the following:
> 
> We remove the '_xmlplus' feature.  It seems better not to rely on the
> xml-sig to provide upgrades to the core xml package.  We're planning
> 2.1, 2.2, ... releases 3-6 months apart which should be quick enough
> for most upgrade needs; we can issue service packs in between if
> necessary.

I could live with this proposal but it isn't my decision. Are you
instructing the SIG to do this? Or are you suggesting I go back to the
SIG and start a discussion on it? What decision making procedure do you
advocate? Who is supposed to make this decision?

> *IF* (and that's still a big "if"!) the xml core support is stable
> before Sept. 26, we'll keep it under the name 'xmlcore'.  If it's not
> stable, we remove it, but we'll consider it for 2.1.

We can easily have something stable within a few days from now. In fact,
all reported bugs are already fixed in patches that I will check in
today. There are no hard technical issues here.

-- 
 Paul Prescod - Not encumbered by corporate consensus
Simplicity does not precede complexity, but follows it. 
	- http://www.cs.yale.edu/homes/perlis-alan/quotes.html



From guido at beopen.com  Fri Sep 15 19:12:31 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 15 Sep 2000 12:12:31 -0500
Subject: [Python-Dev] Re: Is the 2.0 xml package too immature to release?
In-Reply-To: Your message of "Fri, 15 Sep 2000 09:02:40 MST."
             <39C24820.FB951E80@prescod.net> 
References: <Pine.LNX.4.21.0009141806390.25261-100000@amati.techno.com> <200009151240.HAA09833@cj20424-a.reston1.va.home.com>  
            <39C24820.FB951E80@prescod.net> 
Message-ID: <200009151712.MAA13107@cj20424-a.reston1.va.home.com>

[me]
> > I'm not proposing that it be called xmlcore for eternity, but I see a
> > *practical* problem with the 2.0 release: the xml-sig has a package
> > called 'xml' (and they've had dibs on the name for years!) which is
> > incompatible.  We can't force them to issue a new release under a
> > different name.  I don't want to break other people's code that
> > requires the xml-sig's xml package.

[Paul]
> Martin v. Loewis, Greg Stein and others think that they have a
> backwards-compatible solution. You can decide whether to let Martin try
> versus go the "xmlcore" route, or else you could delegate that decision
> (to someone in particular, please!).

I will make the decision based on information gathered by Fred Drake.
You, Martin, Greg Stein and others have to get the information to him.

> > I propose the following:
> > 
> > We remove the '_xmlplus' feature.  It seems better not to rely on the
> > xml-sig to provide upgrades to the core xml package.  We're planning
> > 2.1, 2.2, ... releases 3-6 months apart which should be quick enough
> > for most upgrade needs; we can issue service packs in between if
> > necessary.
> 
> I could live with this proposal but it isn't my decision. Are you
> instructing the SIG to do this? Or are you suggesting I go back to the
> SIG and start a discussion on it? What decision making procedure do you
> advocate? Who is supposed to make this decision?

I feel that the XML-SIG isn't ready for action, so I'm making it easy
for them: they don't have to do anything.  Their package is called
'xml'.  The core package will be called something else.

> > *IF* (and that's still a big "if"!) the xml core support is stable
> > before Sept. 26, we'll keep it under the name 'xmlcore'.  If it's not
> > stable, we remove it, but we'll consider it for 2.1.
> 
> We can easily have something stable within a few days from now. In fact,
> all reported bugs are already fixed in patches that I will check in
> today. There are no hard technical issues here.

Thanks.  This is a great help!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From jeremy at beopen.com  Fri Sep 15 18:54:17 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Fri, 15 Sep 2000 12:54:17 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib pickle.py,1.38,1.39
In-Reply-To: <200009151624.LAA10888@cj20424-a.reston1.va.home.com>
References: <200009151514.IAA26707@slayer.i.sourceforge.net>
	<200009151624.LAA10888@cj20424-a.reston1.va.home.com>
Message-ID: <14786.21561.493632.580653@bitdiddle.concentric.net>

>>>>> "GvR" == Guido van Rossum <guido at beopen.com> writes:

  GvR> Hm...  This seems to add a lot of work to a very common item in
  GvR> pickles.

  GvR> I had a different idea on how to make this safe from abuse:
  GvR> pass eval a globals dict with an empty __builtins__ dict, as
  GvR> follows: {'__builtins__': {}}.

  GvR> Have you timed it?

I just timed it with a few test cases, using strings from
/dev/urandom. 

1. pickle dictionary with 25 items, 10-byte keys, 20-bytes values
   0.1% slowdown

2. pickle dictionary with 25 items, 15-byte keys, 100-byte values
   1.5% slowdown

3. pickle 8k string
   0.6% slowdown

The performance impact seems minimal.  And, of course, pickle is
already incredibly slow compared to cPickle.
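
The benchmark setup can be reproduced in a few lines (a sketch matching
case 1 above; absolute numbers will of course differ by machine and Python
version):

```python
import os
import pickle
import timeit

# case 1 from above: 25-item dict, 10-byte keys, 20-byte values,
# drawn from os.urandom as in the original measurement
d = {os.urandom(10): os.urandom(20) for _ in range(25)}
payload = pickle.dumps(d, protocol=0)  # protocol 0, the text protocol

elapsed = timeit.timeit(lambda: pickle.loads(payload), number=1000)
print("1000 loads took %.3fs" % elapsed)
```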

So it isn't slow, but is it necessary?  I didn't give it much thought;
merely saw that cPickle did these checks in addition to calling eval
with an empty builtins dict.

Jim-- Is there a reason you added the "insecure string pickle"
feature?

I can't think of anything in particular that would go wrong other than
bizarre exceptions, e.g. OverflowError, SyntaxError, etc.  It would be
possible to construct pickles that produced unexpected objects, like
an instance with an attribute name that is an integer:

    >>> x
    <__main__.Foo instance at 0x8140acc>
    >>> dir(x)
    [3, 'attr']

But there are so many other ways to produce weird objects using pickle
that this particular one does not seem to matter.

The only arguments I'm left with, which don't seem particularly
compelling, are:

1. Simplifies error checking for client, which can catch ValueError
   instead of multiplicity of errors
2. Compatibility with cPickle interface

Barring better ideas from Jim Fulton, it sounds like we should
probably remove the checks from both picklers.

Jeremy



From jeremy at beopen.com  Fri Sep 15 19:04:10 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Fri, 15 Sep 2000 13:04:10 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib pickle.py,1.38,1.39
In-Reply-To: <14786.21561.493632.580653@bitdiddle.concentric.net>
References: <200009151514.IAA26707@slayer.i.sourceforge.net>
	<200009151624.LAA10888@cj20424-a.reston1.va.home.com>
	<14786.21561.493632.580653@bitdiddle.concentric.net>
Message-ID: <14786.22154.794230.895070@bitdiddle.concentric.net>

I should have checked the revision history on cPickle before the last
post.  It says:

> revision 2.16
> date: 1997/12/08 15:15:16;  author: guido;  state: Exp;  lines: +50 -24
> Jim Fulton:
> 
>         - Loading non-binary string pickles checks for insecure
>           strings. This is needed because cPickle (still)
>           uses a restricted eval to parse non-binary string pickles.
>           This change is needed to prevent untrusted
>           pickles like::
> 
>             "S'hello world'*2000000\012p0\012."
> 
>           from hosing an application.
> 

So the justification seems to be that an attacker could easily consume
a lot of memory on a system and bog down an application if eval is
used to load the strings.  I imagine there are other ways to cause
trouble, but I don't see much harm in preventing this particular one.

Try running this with the old pickle.  It locked my system up for a
good 30 seconds :-)

x = pickle.loads("S'hello world'*20000000\012p0\012.")

Jeremy
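
For what it's worth, later pickle implementations dropped eval for the
STRING opcode entirely, so the same payload now fails fast instead of
allocating a giant string (assuming current CPython behavior; the exact
exception text may differ):

```python
import pickle

# Jim Fulton's example payload: a tiny pickle that would eval to ~220 MB
malicious = b"S'hello world'*20000000\np0\n."
try:
    pickle.loads(malicious)
except pickle.UnpicklingError as err:
    print("rejected:", err)  # the STRING argument is not a plain quoted string
```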



From jeremy at beopen.com  Sat Sep 16 00:27:15 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: 15 Sep 2000 18:27:15 -0400
Subject: [Python-Dev] [comp.lang.python] sys.setdefaultencoding (2.0b1)
Message-ID: <blhf7h1ebg.fsf@bitdiddle.concentric.net>

I was just reading comp.lang.python and saw an interesting question
that I couldn't answer.  Is anyone here game?

Jeremy
------- Start of forwarded message -------
From: Donn Cave <donn at u.washington.edu>
Newsgroups: comp.lang.python
Subject: sys.setdefaultencoding (2.0b1)
Date: 12 Sep 2000 22:11:31 GMT
Organization: University of Washington
Message-ID: <8pm9mj$3ie2$1 at nntp6.u.washington.edu>
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1

I see codecs.c has gone to some trouble to defer character encoding
setup until it's actually required for something, but it's required
rather early in the process anyway when site.py calls
sys.setdefaultencoding("ascii")

If I strike that line from site.py, startup time goes down by about
a third.

Is that too simple a fix?  Does setdefaultencoding("ascii") do something
important?

	Donn Cave, donn at u.washington.edu
------- End of forwarded message -------



From guido at beopen.com  Sat Sep 16 01:31:52 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 15 Sep 2000 18:31:52 -0500
Subject: [Python-Dev] [comp.lang.python] sys.setdefaultencoding (2.0b1)
In-Reply-To: Your message of "15 Sep 2000 18:27:15 -0400."
             <blhf7h1ebg.fsf@bitdiddle.concentric.net> 
References: <blhf7h1ebg.fsf@bitdiddle.concentric.net> 
Message-ID: <200009152331.SAA01300@cj20424-a.reston1.va.home.com>

> I was just reading comp.lang.python and saw an interesting question
> that I couldn't answer.  Is anyone here game?


From nascheme at enme.ucalgary.ca  Sat Sep 16 00:36:14 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Fri, 15 Sep 2000 16:36:14 -0600
Subject: [Python-Dev] [comp.lang.python] sys.setdefaultencoding (2.0b1)
In-Reply-To: <200009152331.SAA01300@cj20424-a.reston1.va.home.com>; from Guido van Rossum on Fri, Sep 15, 2000 at 06:31:52PM -0500
References: <blhf7h1ebg.fsf@bitdiddle.concentric.net> <200009152331.SAA01300@cj20424-a.reston1.va.home.com>
Message-ID: <20000915163614.A7376@keymaster.enme.ucalgary.ca>

While we're optimizing the startup time, how about lazily loading the
LICENSE.txt file?

  Neil



From amk1 at erols.com  Sat Sep 16 03:10:30 2000
From: amk1 at erols.com (A.M. Kuchling)
Date: Fri, 15 Sep 2000 21:10:30 -0400
Subject: [Python-Dev] Problem with using _xmlplus
Message-ID: <E13a6Uw-0003ud-00@207-172-111-249.s249.tnt1.ann.va.dialup.rcn.com>

The code in Lib/xml/__init__.py seems to be insufficient to completely
delegate matters to the _xmlplus package.  Consider this session with
'python -v':

Script started on Fri Sep 15 21:02:59 2000
[amk at 207-172-111-249 quotations]$ python -v
  ...
>>> from xml.sax import saxlib, saxexts
import xml # directory /usr/lib/python2.0/xml
import xml # precompiled from /usr/lib/python2.0/xml/__init__.pyc
import _xmlplus # directory /usr/lib/python2.0/site-packages/_xmlplus
import _xmlplus # from /usr/lib/python2.0/site-packages/_xmlplus/__init__.py
import xml.sax # directory /usr/lib/python2.0/site-packages/_xmlplus/sax
import xml.sax # from /usr/lib/python2.0/site-packages/_xmlplus/sax/__init__.py
import xml.sax.saxlib # from /usr/lib/python2.0/site-packages/_xmlplus/sax/saxlib.py
import xml.sax.saxexts # from /usr/lib/python2.0/site-packages/_xmlplus/sax/saxexts.py
import imp # builtin

So far, so good.  Now try creating a parser.  This fails; I've hacked
the code slightly so it doesn't swallow the responsible ImportError:

>>> p=saxexts.XMLParserFactory.make_parser("xml.sax.drivers.drv_pyexpat")
import xml # directory /usr/lib/python2.0/xml
import xml # precompiled from /usr/lib/python2.0/xml/__init__.pyc
import sax # directory /usr/lib/python2.0/xml/sax
import sax # precompiled from /usr/lib/python2.0/xml/sax/__init__.pyc
import sax.handler # precompiled from /usr/lib/python2.0/xml/sax/handler.pyc
import sax.expatreader # precompiled from /usr/lib/python2.0/xml/sax/expatreader.pyc
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/usr/lib/python2.0/site-packages/_xmlplus/sax/saxexts.py", line 78, in make_parser
    info=rec_find_module(parser_name)
  File "/usr/lib/python2.0/site-packages/_xmlplus/sax/saxexts.py", line 25, in rec_find_module
    lastmod=apply(imp.load_module,info)
  File "/usr/lib/python2.0/xml/sax/__init__.py", line 21, in ?
    from expatreader import ExpatParser
  File "/usr/lib/python2.0/xml/sax/expatreader.py", line 23, in ?
    from xml.sax import xmlreader
ImportError: cannot import name xmlreader

_xmlplus.sax.saxexts uses imp.find_module() and imp.load_module() to
load parser drivers; it looks like those functions aren't looking at
sys.modules and therefore aren't being fooled by the sys.modules
hackery in Lib/xml/__init__.py, so the _xmlplus package isn't
completely overriding the xml/ package.
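The aliasing trick only fools imports that resolve through sys.modules; a minimal sketch of the mechanism (module names here are invented for illustration, not the actual Lib/xml/__init__.py source):

```python
import sys
import types

# Build a stand-in "replacement" package and alias it under another
# name via sys.modules -- the same kind of trick Lib/xml/__init__.py
# plays to make "xml" resolve to the installed _xmlplus package.
replacement = types.ModuleType("replacement_pkg")
replacement.marker = "provided by the replacement package"
sys.modules["aliased_pkg"] = replacement

import aliased_pkg  # satisfied straight from sys.modules, no file search

print(aliased_pkg.marker)
```

imp.find_module()/imp.load_module(), by contrast, walk the filesystem search path directly and never consult sys.modules, which is why the driver loading in saxexts escapes the hack.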

The guts of Python's import machinery have always been mysterious to
me; can anyone suggest how to fix this?

--amk



From guido at beopen.com  Sat Sep 16 04:06:28 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 15 Sep 2000 21:06:28 -0500
Subject: [Python-Dev] Problem with using _xmlplus
In-Reply-To: Your message of "Fri, 15 Sep 2000 21:10:30 -0400."
             <E13a6Uw-0003ud-00@207-172-111-249.s249.tnt1.ann.va.dialup.rcn.com> 
References: <E13a6Uw-0003ud-00@207-172-111-249.s249.tnt1.ann.va.dialup.rcn.com> 
Message-ID: <200009160206.VAA09344@cj20424-a.reston1.va.home.com>

[Andrew discovers that the _xmlplus hack is broken]

I have recently proposed a simple and robust fix: forget all import
hacking, and use a different name for the xml package in the core and
the xml package provided by PyXML.  I first suggested the name
'xmlcore' for the core xml package, but Martin von Loewis suggested a
better name: 'xmlbase'.

Since PyXML has had dibs on the 'xml' package name for years, it's
best not to try to change that.  We can't force everyone who has
installed an old version of PyXML to upgrade (and to erase the old
package!) so the best solution is to pick a new name for the core XML
support package.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From martin at loewis.home.cs.tu-berlin.de  Sat Sep 16 08:24:41 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Sat, 16 Sep 2000 08:24:41 +0200
Subject: [Python-Dev] Re: [XML-SIG] Problem with using _xmlplus
In-Reply-To: 	<E13a6Uw-0003ud-00@207-172-111-249.s249.tnt1.ann.va.dialup.rcn.com>
	(amk1@erols.com)
References: <E13a6Uw-0003ud-00@207-172-111-249.s249.tnt1.ann.va.dialup.rcn.com>
Message-ID: <200009160624.IAA00804@loewis.home.cs.tu-berlin.de>

> The guts of Python's import machinery have always been mysterious to
> me; can anyone suggest how to fix this?

I had a patch for some time on SF, waiting for approval,
(http://sourceforge.net/patch/?func=detailpatch&patch_id=101444&group_id=6473)
to fix that; I have now installed that patch.

Regards,
Martin



From larsga at garshol.priv.no  Sat Sep 16 12:26:34 2000
From: larsga at garshol.priv.no (Lars Marius Garshol)
Date: 16 Sep 2000 12:26:34 +0200
Subject: [XML-SIG] Re: [Python-Dev] Problem with using _xmlplus
In-Reply-To: <200009160206.VAA09344@cj20424-a.reston1.va.home.com>
References: <E13a6Uw-0003ud-00@207-172-111-249.s249.tnt1.ann.va.dialup.rcn.com> <200009160206.VAA09344@cj20424-a.reston1.va.home.com>
Message-ID: <m3lmwsy6n9.fsf@lambda.garshol.priv.no>

* Guido van Rossum
| 
| [suggests: the XML package in the Python core 'xmlbase']
| 
| Since PyXML has had dibs on the 'xml' package name for years, it's
| best not to try to change that.  We can't force everyone who has
| installed an old version of PyXML to upgrade (and to erase the old
| package!) so the best solution is to pick a new name for the core
| XML support package.

For what it's worth: I like this approach very much. It's simple,
intuitive and not likely to cause any problems.

--Lars M.




From mal at lemburg.com  Sat Sep 16 20:19:59 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Sat, 16 Sep 2000 20:19:59 +0200
Subject: [Python-Dev] [comp.lang.python] sys.setdefaultencoding (2.0b1)
References: <blhf7h1ebg.fsf@bitdiddle.concentric.net> <200009152331.SAA01300@cj20424-a.reston1.va.home.com>
Message-ID: <39C3B9CF.51441D94@lemburg.com>

Guido van Rossum wrote:
> 
> > I was just reading comp.lang.python and saw an interesting question
> > that I couldn't answer.  Is anyone here game?
> 
> From reading the source code for unicodeobject.c, _PyUnicode_Init()
> sets the default to "ascii" anyway, so the call in site.py is quite
> unnecessary.  I think it's a good idea to remove it.  (Look around
> though -- there are some "if 0:" blocks that could make it necessary.
> Maybe the setdefaultencoding() call should be inside an "if 0:" block
> too.  With a comment.)

Agreed. I'll fix this next week.

Some background: the first codec lookup done causes the encodings
package to be loaded which then registers the encodings package
codec search function. Then the 'ascii' codec is looked up
via the codec registry. All this takes time and should only
be done in case the code really uses codecs... (at least that
was the idea).
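The deferred path described above can be seen through the codec registry API (a minimal sketch, shown with the modern codecs module):

```python
import codecs

# The first codec lookup imports the encodings package, which registers
# its search function with the codec registry; repeated lookups for the
# same name are then served from the registry's cache.
info = codecs.lookup("ascii")
print(info.name)  # canonical name of the codec found

# The returned CodecInfo carries the actual encode/decode entry points:
encoded, length = info.encode("hello")
print(encoded, length)
```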

> --Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)
> 
> > Jeremy
> > ------- Start of forwarded message -------
> > From: Donn Cave <donn at u.washington.edu>
> > Newsgroups: comp.lang.python
> > Subject: sys.setdefaultencoding (2.0b1)
> > Date: 12 Sep 2000 22:11:31 GMT
> > Organization: University of Washington
> > Message-ID: <8pm9mj$3ie2$1 at nntp6.u.washington.edu>
> > Mime-Version: 1.0
> > Content-Type: text/plain; charset=ISO-8859-1
> >
> > I see codecs.c has gone to some trouble to defer character encoding
> > setup until it's actually required for something, but it's required
> > rather early in the process anyway when site.py calls
> > sys.setdefaultencoding("ascii")
> >
> > If I strike that line from site.py, startup time goes down by about
> > a third.
> >
> > Is that too simple a fix?  Does setdefaultencoding("ascii") do something
> > important?
> >
> >       Donn Cave, donn at u.washington.edu
> > ------- End of forwarded message -------
> >
> > _______________________________________________
> > Python-Dev mailing list
> > Python-Dev at python.org
> > http://www.python.org/mailman/listinfo/python-dev
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://www.python.org/mailman/listinfo/python-dev

-- 
Marc-Andre Lemburg
________________________________________________________________________
Business:                                        http://www.lemburg.com/
Python Pages:                             http://www.lemburg.com/python/



From fdrake at beopen.com  Sun Sep 17 00:10:19 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Sat, 16 Sep 2000 18:10:19 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0042.txt,1.13,1.14
In-Reply-To: <200009162201.PAA21016@slayer.i.sourceforge.net>
References: <200009162201.PAA21016@slayer.i.sourceforge.net>
Message-ID: <14787.61387.996949.986311@cj42289-a.reston1.va.home.com>

Barry Warsaw writes:
 > Added request for cStringIO.StringIO.readlines() method.  Closes SF
 > bug #110686.

  I think the Patch Manager has a patch for this one, but I don't know
if it's any good.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From bwarsaw at beopen.com  Sun Sep 17 00:38:46 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Sat, 16 Sep 2000 18:38:46 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0042.txt,1.13,1.14
References: <200009162201.PAA21016@slayer.i.sourceforge.net>
	<14787.61387.996949.986311@cj42289-a.reston1.va.home.com>
Message-ID: <14787.63094.667182.915703@anthem.concentric.net>

>>>>> "Fred" == Fred L Drake, Jr <fdrake at beopen.com> writes:

    >> Added request for cStringIO.StringIO.readlines() method.
    >> Closes SF bug #110686.

    Fred>   I think the Patch Manager has a patch for this one, but I
    Fred> don't know if it's any good.

It's patch #101423.  JimF, can you take a look and give a thumbs up or
down?  Or better yet, apply it to your canonical copy and send us an
update for the core.

http://sourceforge.net/patch/?func=detailpatch&patch_id=101423&group_id=5470

-Barry


From martin at loewis.home.cs.tu-berlin.de  Sun Sep 17 13:58:32 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Sun, 17 Sep 2000 13:58:32 +0200
Subject: [Python-Dev] [ Bug #110662 ] rfc822 (PR#358)
Message-ID: <200009171158.NAA01325@loewis.home.cs.tu-berlin.de>

Regarding your report in

http://sourceforge.net/bugs/?func=detailbug&bug_id=110662&group_id=5470

I can't reproduce the problem. In 2.0b1, 

>>> s="Location: https://www.website.com:443/tengah/Dpc/vContent.jhtml?page_type=3&PLANID=4&CONTENTPAGEID=0&TengahSession=312442259237-529/2748412123003458168/-1407548368/4/7002/7002/7004/7004\r\n\r\n" 
>>> t=rfc822.Message(cStringIO.StringIO(s)) 
>>> t['location'] 
'https://www.website.com:443/tengah/Dpc/vContent.jhtml?page_type=3&PLANID=4&CONTENTPAGEID=0&TengahSession=312442259237-529/2748412123003458168/-1407548368/4/7002/7002/7004/7004' 

works fine for me. If the line break between Location: and the URL in
the original report was intentional, rfc822.Message is right in
rejecting the header: Continuation lines must start with white space.
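The folding rule at issue, shown with the modern email parser (the old rfc822 module enforced the same rule; the header value here is shortened for illustration):

```python
from email.parser import Parser

# A correctly folded header: the continuation line starts with a space,
# so the parser keeps it as part of the Location value.
folded = (
    "Location: https://www.website.com:443/tengah/Dpc/vContent.jhtml\r\n"
    " ?page_type=3&PLANID=4\r\n"
    "\r\n"
)
msg = Parser().parsestr(folded)
print(repr(msg["Location"]))

# Had the second line lacked the leading whitespace, it would not be a
# continuation at all -- which is exactly why rfc822.Message rejected
# the broken header in the original report.
```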

I also cannot see how the patch could improve anything; proper
continuation lines are already supported. On what system did you
experience the problem?

If I misunderstood the report, please let me know.

Regards,
Martin


From trentm at ActiveState.com  Sun Sep 17 23:27:18 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Sun, 17 Sep 2000 14:27:18 -0700
Subject: [Python-Dev] problems importing _tkinter on Linux build
Message-ID: <20000917142718.A25180@ActiveState.com>

I get the following error trying to import _tkinter in a Python 2.0 build:

> ./python
./python: error in loading shared libraries: libtk8.3.so: cannot open shared object file: No such file or directory


Here is the relevant section of my Modules/Setup:

_tkinter _tkinter.c tkappinit.c -DWITH_APPINIT \
    -I/usr/local/include \
    -I/usr/X11R6/include \
    -L/usr/local/lib \
    -ltk8.3 -ltcl8.3 \
    -L/usr/X11R6/lib \
    -lX11


I got the Tcl/Tk 8.3 source from dev.scriptics.com, and ran
  > ./configure --enable-gcc --enable-shared
  > make
  > make install   # as root
in the tcl and tk source directories.


The tcl and tk libs are in /usr/local/lib:

    [trentm at molotok contrib]$ ls -alF /usr/local/lib
    ...
    -r-xr-xr-x   1 root     root       579177 Sep 17 14:03 libtcl8.3.so*
    -rw-r--r--   1 root     root         1832 Sep 17 14:03 libtclstub8.3.a
    -r-xr-xr-x   1 root     root       778034 Sep 17 14:10 libtk8.3.so*
    -rw-r--r--   1 root     root         3302 Sep 17 14:10 libtkstub8.3.a
    drwxr-xr-x   8 root     root         4096 Sep 17 14:03 tcl8.3/
    -rw-r--r--   1 root     root         6722 Sep 17 14:03 tclConfig.sh
    drwxr-xr-x   4 root     root         4096 Sep 17 14:10 tk8.3/
    -rw-r--r--   1 root     root         3385 Sep 17 14:10 tkConfig.sh


Does anybody know what my problem is? Is the error from libtk8.3.so
complaining that it cannot load a library on which it depends? Is there some
system library dependency that I am likely missing?


Thanks,
Trent

-- 
Trent Mick
TrentM at ActiveState.com


From trentm at ActiveState.com  Sun Sep 17 23:46:14 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Sun, 17 Sep 2000 14:46:14 -0700
Subject: [Python-Dev] problems importing _tkinter on Linux build
In-Reply-To: <20000917142718.A25180@ActiveState.com>; from trentm@ActiveState.com on Sun, Sep 17, 2000 at 02:27:18PM -0700
References: <20000917142718.A25180@ActiveState.com>
Message-ID: <20000917144614.A25718@ActiveState.com>

On Sun, Sep 17, 2000 at 02:27:18PM -0700, Trent Mick wrote:
> 
> I get the following error trying to import _tkinter in a Python 2.0 build:
> 
> > ./python
> ./python: error in loading shared libraries: libtk8.3.so: cannot open shared object file: No such file or directory
> 

Duh, learning about LD_LIBRARY_PATH (set LD_LIBRARY_PATH to /usr/local/lib)
and everything is hunky dory. I presumed that /usr/local/lib would be
on the default search path for shared libraries. Bad assumption I guess.

Trent


-- 
Trent Mick
TrentM at ActiveState.com


From martin at loewis.home.cs.tu-berlin.de  Mon Sep 18 08:59:33 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Mon, 18 Sep 2000 08:59:33 +0200
Subject: [Python-Dev] problems importing _tkinter on Linux build
Message-ID: <200009180659.IAA14068@loewis.home.cs.tu-berlin.de>

> I presumed that /usr/local/lib would be on the default search path
> for shared libraries. Bad assumption I guess.

On Linux, having /usr/local/lib in the search path is quite
common. The default search path is defined in /etc/ld.so.conf. What
distribution are you using? Perhaps somebody forgot to run
/sbin/ldconfig after installing the tcl library? Does tclsh find it?

Regards,
Martin



From jbearce at copeland.com  Mon Sep 18 13:22:36 2000
From: jbearce at copeland.com (jbearce at copeland.com)
Date: Mon, 18 Sep 2000 07:22:36 -0400
Subject: [Python-Dev] Re: [ Bug #110662 ] rfc822 (PR#358)
Message-ID: <OF66DA0B3D.234625E6-ON8525695E.003DFEEF@rsd.citistreet.org>

No, the line break wasn't intentional.  I ran into this problem on a stock
RedHat 6.2 (intel) system with python 1.5.2 reading pages from an iPlanet
Enterprise Server 4.1 on an NT box.  The patch I included fixed the problem
for me.  This was a consistent problem, so I should be able to
reproduce it, and I'll send you any new info I can gather.  I'll also
try 2.0b1 with my script to see if it works.

Thanks,
Jim



                                                                                                                                
"Martin v. Loewis" <martin at loewis.home.cs.tu-berlin.de> wrote on
09/17/2000 07:58 AM (To: jbearce at copeland.com; cc: python-dev at python.org;
Subject: [ Bug #110662 ] rfc822 (PR#358)):

Regarding your report in

http://sourceforge.net/bugs/?func=detailbug&bug_id=110662&group_id=5470

I can't reproduce the problem. In 2.0b1,

>>> s="Location: https://www.website.com:443/tengah/Dpc/vContent.jhtml?page_type=3&PLANID=4&CONTENTPAGEID=0&TengahSession=312442259237-529/2748412123003458168/-1407548368/4/7002/7002/7004/7004\r\n\r\n"
>>> t=rfc822.Message(cStringIO.StringIO(s))
>>> t['location']
'https://www.website.com:443/tengah/Dpc/vContent.jhtml?page_type=3&PLANID=4&CONTENTPAGEID=0&TengahSession=312442259237-529/2748412123003458168/-1407548368/4/7002/7002/7004/7004'


works fine for me. If the line break between Location: and the URL in
the original report was intentional, rfc822.Message is right in
rejecting the header: Continuation lines must start with white space.

I also cannot see how the patch could improve anything; proper
continuation lines are already supported. On what system did you
experience the problem?

If I misunderstood the report, please let me know.

Regards,
Martin





From bwarsaw at beopen.com  Mon Sep 18 15:35:32 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 18 Sep 2000 09:35:32 -0400 (EDT)
Subject: [Python-Dev] problems importing _tkinter on Linux build
References: <20000917142718.A25180@ActiveState.com>
	<20000917144614.A25718@ActiveState.com>
Message-ID: <14790.6692.908424.16235@anthem.concentric.net>

>>>>> "TM" == Trent Mick <trentm at ActiveState.com> writes:

    TM> Duh, learning about LD_LIBRARY_PATH (set LD_LIBRARY_PATH to
    TM> /usr/local/lib) and everything is hunky dory. I presumed that
    TM> /usr/local/lib would be on the default search path for shared
    TM> libraries. Bad assumption I guess.

Also, look at the -R flag to ld.  In my experience (primarily on
Solaris), any time you compiled with a -L flag you absolutely /had/ to
include a similar -R flag, otherwise you'd force all your users to set
LD_LIBRARY_PATH.

-Barry


From trentm at ActiveState.com  Mon Sep 18 18:39:04 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Mon, 18 Sep 2000 09:39:04 -0700
Subject: [Python-Dev] problems importing _tkinter on Linux build
In-Reply-To: <14790.6692.908424.16235@anthem.concentric.net>; from bwarsaw@beopen.com on Mon, Sep 18, 2000 at 09:35:32AM -0400
References: <20000917142718.A25180@ActiveState.com> <20000917144614.A25718@ActiveState.com> <14790.6692.908424.16235@anthem.concentric.net>
Message-ID: <20000918093904.A23881@ActiveState.com>

On Mon, Sep 18, 2000 at 09:35:32AM -0400, Barry A. Warsaw wrote:
> 
> >>>>> "TM" == Trent Mick <trentm at ActiveState.com> writes:
> 
>     TM> Duh, learning about LD_LIBRARY_PATH (set LD_LIBRARY_PATH to
>     TM> /usr/local/lib) and everything is hunky dory. I presumed that
>     TM> /usr/local/lib would be on the default search path for shared
>     TM> libraries. Bad assumption I guess.
> 
> Also, look at the -R flag to ld.  In my experience (primarily on
> Solaris), any time you compiled with a -L flag you absolutely /had/ to
> include a similar -R flag, otherwise you'd force all your users to set
> LD_LIBRARY_PATH.
> 

Thanks, Barry. Reading about -R led me to -rpath, which works for me. Here is
the algorithm from the info docs:

`-rpath-link DIR'
     When using ELF or SunOS, one shared library may require another.
     This happens when an `ld -shared' link includes a shared library
     as one of the input files.

     When the linker encounters such a dependency when doing a
     non-shared, non-relocateable link, it will automatically try to
     locate the required shared library and include it in the link, if
     it is not included explicitly.  In such a case, the `-rpath-link'
     option specifies the first set of directories to search.  The
     `-rpath-link' option may specify a sequence of directory names
     either by specifying a list of names separated by colons, or by
     appearing multiple times.

     The linker uses the following search paths to locate required
     shared libraries.
       1. Any directories specified by `-rpath-link' options.

       2. Any directories specified by `-rpath' options.  The difference
          between `-rpath' and `-rpath-link' is that directories
          specified by `-rpath' options are included in the executable
          and used at runtime, whereas the `-rpath-link' option is only
          effective at link time.

       3. On an ELF system, if the `-rpath' and `-rpath-link' options
          were not used, search the contents of the environment variable
          `LD_RUN_PATH'.

       4. On SunOS, if the `-rpath' option was not used, search any
          directories specified using `-L' options.

       5. For a native linker, the contents of the environment variable
          `LD_LIBRARY_PATH'.

       6. The default directories, normally `/lib' and `/usr/lib'.

     For the native ELF linker, as the last resort, the contents of
     /etc/ld.so.conf is used to build the set of directories to search.

     If the required shared library is not found, the linker will issue
     a warning and continue with the link.


Trent


-- 
Trent Mick
TrentM at ActiveState.com


From trentm at ActiveState.com  Mon Sep 18 18:42:51 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Mon, 18 Sep 2000 09:42:51 -0700
Subject: [Python-Dev] problems importing _tkinter on Linux build
In-Reply-To: <200009180659.IAA14068@loewis.home.cs.tu-berlin.de>; from martin@loewis.home.cs.tu-berlin.de on Mon, Sep 18, 2000 at 08:59:33AM +0200
References: <200009180659.IAA14068@loewis.home.cs.tu-berlin.de>
Message-ID: <20000918094251.B23881@ActiveState.com>

On Mon, Sep 18, 2000 at 08:59:33AM +0200, Martin v. Loewis wrote:
> > I presumed that /usr/local/lib would be on the default search path
> > for shared libraries. Bad assumption I guess.
> 
> On Linux, having /usr/local/lib in the search path is quite
> common. The default search path is defined in /etc/ld.so.conf. What
> distribution are you using? Perhaps somebody forgot to run
> /sbin/ldconfig after installing the tcl library? Does tclsh find it?

Using RedHat 6.2


[trentm at molotok ~]$ cat /etc/ld.so.conf
/usr/X11R6/lib
/usr/i486-linux-libc5/lib


So no /usr/local/lib there. Barry's suggestion worked for me, though I think
I agree that /usr/local/lib is a reasonable path to include.

Thanks,
Trent

-- 
Trent Mick
TrentM at ActiveState.com


From jeremy at beopen.com  Tue Sep 19 00:33:02 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Mon, 18 Sep 2000 18:33:02 -0400 (EDT)
Subject: [Python-Dev] guidelines for bug triage
Message-ID: <14790.38942.543387.233812@bitdiddle.concentric.net>

Last week I promised to post some guidelines on bug triage.  In the
interim, the number of open bugs has dropped by about 30.  We still
have 71 open bugs to deal with.  The goal is to get the number of open
bugs below 50 before the 2.0b2 release next week, so there is still a
lot to do.  So I've written up some general guidelines, which I'll
probably put in a PEP.

One thing that the guidelines lack is a list of people willing to
handle bug reports and their areas of expertise.  If people send me
email with that information, I'll include it in the PEP.

Jeremy


1. Make sure the bug category and bug group are correct.  If they are 
   correct, it is easier for someone interested in helping to find
   out, say, what all the open Tkinter bugs are.

2. If it's a minor feature request that you don't plan to address
   right away, add it to PEP 42 or ask the owner to add it for you.
   If you add the bug to PEP 42, mark the bug as "feature request",
   "later", and "closed"; and add a comment to the bug saying that
   this is the case (mentioning the PEP explicitly).

3. Assign the bug a reasonable priority.  We don't yet have a clear
   sense of what each priority should mean, except that 9 is highest
   and 1 is lowest.  One rule, however, is that bugs with priority
   seven or higher must be fixed before the next release.

4. If a bug report doesn't have enough information to allow you to
   reproduce or diagnose it, send email to the original submitter and
   ask for more information.  If the original report is really thin
   and your email doesn't get a response after a reasonable waiting
   period, you can close the bug.

5. If you fix a bug, mark the status as "Fixed" and close it.  In the
   comments, include the CVS revision numbers of the affected
   files.  In the CVS checkin message, include the SourceForge bug
   number *and* a normal description of the change.

6. If you are assigned a bug that you are unable to deal with, assign
   it to someone else.  The guys at PythonLabs get paid to fix these
   bugs, so pick one of them if there is no other obvious candidate.



From barry at scottb.demon.co.uk  Tue Sep 19 00:28:46 2000
From: barry at scottb.demon.co.uk (Barry Scott)
Date: Mon, 18 Sep 2000 23:28:46 +0100
Subject: [Python-Dev] Python 1.5.2 modules need porting to 2.0 because of unicode - comments please
Message-ID: <000001c021bf$cf081f20$060210ac@private>

I have managed to get all our critical python code up and
running under 2.0b1#4, around 15,000 lines. We use win32com
and wxPython extensions. The code drive SourceSafe and includes
a Web server that schedules builds for us.

The only problem I encountered was mixing string
and unicode types.

Using the smtplib I was passing in a unicode type as the body
of the message. The send() call hangs. I use encode() and all
is well.

Is this a user error in the use of smtplib or a bug?

I found that I had a lot of unicode floating around from win32com
that I was passing into wxPython. It checks for string and raises
exceptions. More use of encode() and we are up and running.

Is this what you expected when you added unicode?

		Barry



From barry at scottb.demon.co.uk  Tue Sep 19 00:43:59 2000
From: barry at scottb.demon.co.uk (Barry Scott)
Date: Mon, 18 Sep 2000 23:43:59 +0100
Subject: [Python-Dev] How do you want bugs reported against 2.0 beta?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEEIHFAA.tim_one@email.msn.com>
Message-ID: <000201c021c1$ef71c7f0$060210ac@private>

At the risk of having my head bitten off again...

Why don't you tell people how to report bugs in python on the web site
or the documentation?

I'd expect this info in the docs and on the web site for python.

	BArry



From guido at beopen.com  Tue Sep 19 01:45:12 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 18 Sep 2000 18:45:12 -0500
Subject: [Python-Dev] Python 1.5.2 modules need porting to 2.0 because of unicode - comments please
In-Reply-To: Your message of "Mon, 18 Sep 2000 23:28:46 +0100."
             <000001c021bf$cf081f20$060210ac@private> 
References: <000001c021bf$cf081f20$060210ac@private> 
Message-ID: <200009182345.SAA03116@cj20424-a.reston1.va.home.com>

> I have managed to get all our critical python code up and
> running under 2.0b1#4, around 15,000 lines. We use win32com
> and wxPython extensions. The code drives SourceSafe and includes
> a Web server that schedules builds for us.
> 
> The only problem I encountered was mixing string
> and unicode types.
> 
> Using the smtplib I was passing in a unicode type as the body
> of the message. The send() call hangs. I use encode() and all
> is well.
> 
> Is this a user error in the use of smtplib or a bug?
> 
> I found that I had a lot of unicode floating around from win32com
> that I was passing into wxPython. It checks for string and raises
> exceptions. More use of encode() and we are up and running.
> 
> Is this what you expected when you added unicode?

Barry, I'm unclear on what exactly is happening.  Where does the
Unicode come from?  You implied that your code worked under 1.5.2,
which doesn't support Unicode.  How can code that works under 1.5.2
suddenly start producing Unicode strings?  Unless you're now applying
the existing code to new (Unicode) input data -- in which case, yes,
we expect that fixes are sometimes needed.

The smtplib problem may be easily explained -- AFAIK, the SMTP
protocol doesn't support Unicode, and the module isn't Unicode-aware,
so it is probably writing garbage to the socket.
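The fix Barry applied can be sketched like this (the encoding choice is illustrative; the point is that text must become bytes before it hits a byte-oriented protocol such as SMTP):

```python
# Unicode text bound for a byte-oriented protocol has to be encoded
# explicitly; handing over raw unicode is what produced garbage on the
# wire in the un-Unicode-aware module.
body = u"caf\u00e9 ordering report"
wire = body.encode("utf-8")
print(wire)  # b'caf\xc3\xa9 ordering report'
```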

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From guido at beopen.com  Tue Sep 19 01:51:26 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 18 Sep 2000 18:51:26 -0500
Subject: [Python-Dev] How do you want bugs reported against 2.0 beta?
In-Reply-To: Your message of "Mon, 18 Sep 2000 23:43:59 +0100."
             <000201c021c1$ef71c7f0$060210ac@private> 
References: <000201c021c1$ef71c7f0$060210ac@private> 
Message-ID: <200009182351.SAA03195@cj20424-a.reston1.va.home.com>

> At the risk of having my head bitten off again...

Don't worry, it's only a virtual bite... :-)

> Why don't you tell people how to report bugs in python on the web site
> or the documentation?
> 
> I'd expect this info in the docs and on the web site for python.

In the README file:

    Bug reports
    -----------

    To report or search for bugs, please use the Python Bug
    Tracker at http://sourceforge.net/bugs/?group_id=5470.

But I agree that nobody reads the README file any more.  So yes, it
should be added to the website.  I don't think it belongs in the
documentation pack, although Fred may disagree (where should it be
added?).

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From barry at scottb.demon.co.uk  Tue Sep 19 01:00:13 2000
From: barry at scottb.demon.co.uk (Barry Scott)
Date: Tue, 19 Sep 2000 00:00:13 +0100
Subject: [Python-Dev] 2.0 Optimization & speed
In-Reply-To: <200009081623.SAA14090@python.inrialpes.fr>
Message-ID: <000701c021c4$3412d550$060210ac@private>

There needs to be a set of benchmarks that can be used to test the effect
of any changes. Is there a set that exist already that can be used?

		Barry


> Behalf Of Vladimir Marangozov
> 
> Continuing my impressions on the user's feedback to date: Donn Cave
> & MAL are at least two voices I've heard about an overall slowdown
> of the 2.0b1 release compared to 1.5.2. Frankly, I have no idea where
> this slowdown comes from and I believe that we have only vague guesses
> about the possible causes: unicode database, more opcodes in ceval, etc.
> 
> I wonder whether we are in a position to try improving Python's
> performance with some `wise quickies' in a next beta. But this raises
> a more fundamental question on what is our margin for manoeuvres at this
> point. This in turn implies that we need some classification of the
> proposed optimizations to date.
> 
> Perhaps it would be good to create a dedicated Web page for this, but
> in the meantime, let's try to build a list/table of the ideas that have
> been proposed so far. This would be useful anyway, and the list would be
> filled as time goes.
> 
> Trying to push this initiative one step further, here's a very rough start
> on the top of my head:
> 
> Category 1: Algorithmic Changes
> 
> These are the most promising, since they don't relate to pure technicalities
> but imply potential improvements with some evidence.
> I'd put in this category:
> 
> - the dynamic dictionary/string specialization by Fred Drake
>   (this is already in). Can this be applied in other areas? If so, where?
> 
> - the Python-specific mallocs. Actually, I'm pretty sure that a lot of
>   `overhead' is due to the standard mallocs which happen to be expensive
>   for Python in both space and time. Python is very malloc-intensive.
>   The only reason I've postponed my obmalloc patch is that I still haven't
>   provided an interface which allows evaluating its impact on the
>   mem size consumption. It gives noticeable speedup on all machines, so
>   it accounts as a good candidate w.r.t. performance.
> 
> - ??? (maybe some parts of MAL's optimizations could go here)
> 
> Category 2: Technical / Code optimizations
> 
> This category includes all (more or less) controversial proposals, like
> 
> - my latest lookdict optimizations (a typical controversial `quickie')
> 
> - opcode folding & reordering. Actually, I'm unclear on why Guido
>   postponed the reordering idea; it has received positive feedback
>   and all theoretical reasoning and practical experiments showed that
>   this "could" help, although without any guarantees. Nobody reported
>   slowdowns, though. This is typically a change without real dangers.
> 
> - kill the async / pending calls logic. (Tim, what happened with this
>   proposal?)
> 
> - compact the unicodedata database, which is expected to reduce the
>   mem footprint, maybe improve startup time, etc. (ongoing)
> 
> - proposal about optimizing the "file hits" on startup.
> 
> - others?
> 
> If there are potential `wise quickies', maybe it's good to refresh
> them now and experiment a bit more before the final release?
> 
> -- 
>        Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
> http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://www.python.org/mailman/listinfo/python-dev
> 


From MarkH at ActiveState.com  Tue Sep 19 01:18:18 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Tue, 19 Sep 2000 10:18:18 +1100
Subject: [Python-Dev] Python 1.5.2 modules need porting to 2.0 because of unicode - comments please
In-Reply-To: <200009182345.SAA03116@cj20424-a.reston1.va.home.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBIEPDDJAA.MarkH@ActiveState.com>

[Guido]

> Barry, I'm unclear on what exactly is happening.  Where does the
> Unicode come from?  You implied that your code worked under 1.5.2,
> which doesn't support Unicode.  How can code that works under 1.5.2
> suddenly start producing Unicode strings?  Unless you're now applying
> the existing code to new (Unicode) input data -- in which case, yes,
> we expect that fixes are sometimes needed.

My guess is that the Unicode strings are coming from COM.  In 1.5, we used
the Win32 specific Unicode object, and win32com did lots of explicit
str()s - the user of the end object usually saw real Python strings.

For 1.6 and later, I changed this, so that real Python Unicode objects are
used and returned instead of the strings.  I figured this would be a good
test for Unicode integration, as Unicode and strings are ultimately
supposed to be interchangeable ;-)

win32com.client.__init__ starts with:

NeedUnicodeConversions = not hasattr(__builtin__, "unicode")

This forces the flag "true" under 1.5, and "false" otherwise.  Barry can force it
to "true", and win32com will always force a str() over all Unicode objects.

However, this will _still_ break in a few cases (and I have had some
reported).  str() of a Unicode object can often raise that ugly "char out
of range" error.  As Barry notes, the code would have to change to do an
"encode('mbcs')" to be safe anyway...

But regardless of where Barry's Unicode objects come from, his point
remains open.  Do we consider the library's lack of Unicode awareness a
bug, or do we drop any pretence of string and unicode objects being
interchangeable?

As a related issue, do we consider the fact that str(unicode_ob) often fails
to be a problem?  The users on c.l.py appear to...

Mark.



From gward at mems-exchange.org  Tue Sep 19 01:29:00 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Mon, 18 Sep 2000 19:29:00 -0400
Subject: [Python-Dev] Speaking of bug triage...
Message-ID: <20000918192859.A12253@ludwig.cnri.reston.va.us>

... just what are the different categories supposed to mean?
Specifically, what's the difference between "Library" and "Modules"?

The library-related open bugs in the "Library" category cover the
following modules:
  * anydbm
  * rfc822 (several!)
  * mimedecode
  * urlparse
  * cmath
  * CGIHTTPServer

And in the "Modules" category we have:
  * mailbox
  * socket/os
  * re/sre (several)
  * anydbm
  * xml/_xmlplus
  * cgi/xml

Hmmm... looks to me like there's no difference between "Library" and
"Modules" -- heck, I could have guessed that just from looking at the
names.  The library *is* modules!

Was this perhaps meant to be a distinction between pure Python and
extension modules?

        Greg


From jeremy at beopen.com  Tue Sep 19 01:36:41 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Mon, 18 Sep 2000 19:36:41 -0400 (EDT)
Subject: [Python-Dev] Speaking of bug triage...
In-Reply-To: <20000918192859.A12253@ludwig.cnri.reston.va.us>
References: <20000918192859.A12253@ludwig.cnri.reston.va.us>
Message-ID: <14790.42761.418440.578432@bitdiddle.concentric.net>

>>>>> "GW" == Greg Ward <gward at mems-exchange.org> writes:

  GW> Was this perhaps meant to be a distinction between pure Python
  GW> and extension modules?

That's right -- Library == ".py" and Modules == ".c".  Perhaps not the
best names, but they're short.

Jeremy


From tim_one at email.msn.com  Tue Sep 19 01:34:30 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Mon, 18 Sep 2000 19:34:30 -0400
Subject: [Python-Dev] Speaking of bug triage...
In-Reply-To: <20000918192859.A12253@ludwig.cnri.reston.va.us>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEMNHGAA.tim_one@email.msn.com>

[Greg Ward]
> ... just what are the different categories supposed to mean?
> Specifically, what's the difference between "Library" and "Modules"?

Nobody knows.  I've been using Library for .py files under Lib/, and Modules
for anything written in C whose name works in an "import".  Other people are
doing other things, but they're wrong <wink>.




From guido at beopen.com  Tue Sep 19 02:43:17 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 18 Sep 2000 19:43:17 -0500
Subject: [Python-Dev] Speaking of bug triage...
In-Reply-To: Your message of "Mon, 18 Sep 2000 19:36:41 -0400."
             <14790.42761.418440.578432@bitdiddle.concentric.net> 
References: <20000918192859.A12253@ludwig.cnri.reston.va.us>  
            <14790.42761.418440.578432@bitdiddle.concentric.net> 
Message-ID: <200009190043.TAA06331@cj20424-a.reston1.va.home.com>

>   GW> Was this perhaps meant to be a distinction between pure Python
>   GW> and extension modules?
> 
> That's right -- Library == ".py" and Modules == ".c".  Perhaps not the
> best names, but they're short.

Think "subdirectories in the source tree" and you'll never make a
mistake again.  (For this particular choice. :-)

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From barry at scottb.demon.co.uk  Tue Sep 19 01:43:25 2000
From: barry at scottb.demon.co.uk (Barry Scott)
Date: Tue, 19 Sep 2000 00:43:25 +0100
Subject: [Python-Dev] Python 1.5.2 modules need porting to 2.0 because of unicode - comments please
In-Reply-To: <200009182345.SAA03116@cj20424-a.reston1.va.home.com>
Message-ID: <000801c021ca$3c9daa50$060210ac@private>

Mark's Python COM code is the source of the unicode. I'm guessing that the
old 1.5.2 support coerced to string, and now that unicode is around Mark's
code gives me unicode strings. Our app is driving Microsoft Visual
SourceSafe through COM.

The offending line that upgraded all strings to unicode and broke mail:

file.write( 'Crit: Searching for new and changed files since label %s\n' % previous_source_label )

previous_source_label is unicode from a call to a COM object.

file is a StringIO object.

		Barry

> -----Original Message-----
> From: python-dev-admin at python.org [mailto:python-dev-admin at python.org]On
> Behalf Of Guido van Rossum
> Sent: 19 September 2000 00:45
> To: Barry Scott
> Cc: PythonDev
> Subject: Re: [Python-Dev] Python 1.5.2 modules need porting to 2.0
> because of unicode - comments please
> 
> 
> > I have managed to get all our critical python code up and
> > running under 2.0b1#4, around 15,000 lines. We use win32com
> > and wxPython extensions. The code drives SourceSafe and includes
> > a Web server that schedules builds for us.
> > 
> > The only problem I encountered was the problem of mixing string
> > and unicode types.
> > 
> > Using the smtplib I was passing in a unicode type as the body
> > of the message. The send() call hangs. I use encode() and all
> > is well.
> > 
> > Is this a user error in the use of smtplib or a bug?
> > 
> > I found that I had a lot of unicode floating around from win32com
> > that I was passing into wxPython. It checks for string and raises
> > exceptions. More use of encode() and we are up and running.
> > 
> > Is this what you expected when you added unicode?
> 
> Barry, I'm unclear on what exactly is happening.  Where does the
> Unicode come from?  You implied that your code worked under 1.5.2,
> which doesn't support Unicode.  How can code that works under 1.5.2
> suddenly start producing Unicode strings?  Unless you're now applying
> the existing code to new (Unicode) input data -- in which case, yes,
> we expect that fixes are sometimes needed.
> 
> The smtplib problem may be easily explained -- AFAIK, the SMTP
> protocol doesn't support Unicode, and the module isn't Unicode-aware,
> so it is probably writing garbage to the socket.
> 
> --Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://www.python.org/mailman/listinfo/python-dev
> 


From fdrake at beopen.com  Tue Sep 19 01:45:55 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Mon, 18 Sep 2000 19:45:55 -0400 (EDT)
Subject: [Python-Dev] How do you want bugs reported against 2.0 beta?
In-Reply-To: <000201c021c1$ef71c7f0$060210ac@private>
References: <LNBBLJKPBEHFEDALKOLCCEEIHFAA.tim_one@email.msn.com>
	<000201c021c1$ef71c7f0$060210ac@private>
Message-ID: <14790.43315.8034.192884@cj42289-a.reston1.va.home.com>

Barry Scott writes:
 > At the risk of having my head bitten off again...
 > 
 > Why don't you tell people how to report bugs in python on the web site
 > or the documentation?
 > 
 > I'd expect this info in the docs and on the web site for python.

  Good point.  I think this should be available at both locations as
well.  I'll see what I can do about the documentation.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From gward at mems-exchange.org  Tue Sep 19 01:55:35 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Mon, 18 Sep 2000 19:55:35 -0400
Subject: [Python-Dev] Speaking of bug triage...
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEMNHGAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Mon, Sep 18, 2000 at 07:34:30PM -0400
References: <20000918192859.A12253@ludwig.cnri.reston.va.us> <LNBBLJKPBEHFEDALKOLCGEMNHGAA.tim_one@email.msn.com>
Message-ID: <20000918195535.A19131@ludwig.cnri.reston.va.us>

On 18 September 2000, Tim Peters said:
> Nobody knows.  I've been using Library for .py files under Lib/, and Modules
> for anything written in C whose name works in an "import".  Other people are
> doing other things, but they're wrong <wink>.

That's what I suspected.  I've just reclassified a couple of bugs.  I
left ambiguous ones where they were.

        Greg


From barry at scottb.demon.co.uk  Tue Sep 19 02:05:17 2000
From: barry at scottb.demon.co.uk (Barry Scott)
Date: Tue, 19 Sep 2000 01:05:17 +0100
Subject: [Python-Dev] Python 1.5.2 modules need porting to 2.0 because of unicode - comments please
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBIEPDDJAA.MarkH@ActiveState.com>
Message-ID: <000901c021cd$4a9b2df0$060210ac@private>

> But regardless of where Barry's Unicode objects come from, his point
> remains open.  Do we consider the library's lack of Unicode awareness a
> bug, or do we drop any pretence of string and unicode objects being
> interchangeable?
> 
> As a related issue, do we consider the fact that str(unicode_ob) often fails
> to be a problem?  The users on c.l.py appear to...
> 
> Mark.

Exactly.

I want unicode from Mark's code; unicode is goodness.

But the principle of least astonishment may well be broken in the library,
indeed in the language.

It took me 40 minutes to prove that the unicode came from Mark's code and
I know the code involved intimately. Debugging these failures is tedious.

I don't have an opinion as to the best resolution yet.

One option would be for Mark's code to default to string. But that does not
help once someone chooses to enable unicode in Mark's code.

Maybe '%s' % u'x' should return 'x' not u'x' and u'%s' % 's' return u's'

Maybe 's' + u'x' should return 'sx' not u'sx'. and u's' + 'x' returns u'sx'

The above two maybes would have hidden the problem in my code, barring exceptions.
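As a hindsight note, later Pythons resolved this astonishment in the opposite direction from either maybe: byte strings and text strings stopped coercing at all, so the kind of mixing debugged above fails immediately at the point of use. A minimal sketch in modern Python:

```python
# Modern Python's answer to the coercion question: no implicit coercion
# between byte strings and text strings -- mixing them raises TypeError
# right where it happens, so a stray value is visible immediately.
try:
    "s" + b"x"  # the silent-coercion case that took 40 minutes to trace
    coerced = True
except TypeError:
    coerced = False

assert coerced is False
```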

	Barry



From barry at scottb.demon.co.uk  Tue Sep 19 02:13:33 2000
From: barry at scottb.demon.co.uk (Barry Scott)
Date: Tue, 19 Sep 2000 01:13:33 +0100
Subject: [Python-Dev] How do you want bugs reported against 2.0 beta?
In-Reply-To: <200009182351.SAA03195@cj20424-a.reston1.va.home.com>
Message-ID: <000a01c021ce$72b5cab0$060210ac@private>

What README? It's not on my Start - Programs - Python 2.0 menu.

You don't mean I have to look on the disk do you :-)

	Barry

> -----Original Message-----
> From: guido at cj20424-a.reston1.va.home.com
> [mailto:guido at cj20424-a.reston1.va.home.com]On Behalf Of Guido van
> Rossum
> Sent: 19 September 2000 00:51
> To: Barry Scott
> Cc: PythonDev
> Subject: Re: [Python-Dev] How do you want bugs reported against 2.0
> beta?
> 
> 
> > At the risk of having my head bitten off again...
> 
> Don't worry, it's only a virtual bite... :-)
> 
> > Why don't you tell people how to report bugs in python on the web site
> > or the documentation?
> > 
> > I'd expect this info in the docs and on the web site for python.
> 
> In the README file:
> 
>     Bug reports
>     -----------
> 
>     To report or search for bugs, please use the Python Bug
>     Tracker at http://sourceforge.net/bugs/?group_id=5470.
> 
> But I agree that nobody reads the README file any more.  So yes, it
> should be added to the website.  I don't think it belongs in the
> documentation pack, although Fred may disagree (where should it be
> added?).
> 
> --Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)
> 


From tim_one at email.msn.com  Tue Sep 19 02:22:13 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Mon, 18 Sep 2000 20:22:13 -0400
Subject: [Python-Dev] 2.0 Optimization & speed
In-Reply-To: <000701c021c4$3412d550$060210ac@private>
Message-ID: <LNBBLJKPBEHFEDALKOLCKENBHGAA.tim_one@email.msn.com>

[Barry Scott]
> There needs to be a set of benchmarks that can be used to test
> the effect of any changes. Is there a set that exist already that
> can be used?

None adequate.  Calls for volunteers in the past have been met with silence.

Lib/test/pystone.py is remarkable in that it may be the least typical of all
Python programs <0.4 wink>.  It seems a good measure of how long it takes to
make a trip around the eval loop, though.
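The kind of eval-loop measurement meant here can be sketched as a tiny, hypothetical timing harness (not pystone itself): time a tight loop of cheap bytecodes and report trips per second.

```python
import time

def eval_loop_rate(n=100000):
    """Rough trips-per-second through the eval loop: a tight loop of
    cheap operations, in the spirit of pystone (hypothetical harness)."""
    t0 = time.perf_counter()
    x = 0
    for i in range(n):
        x = x + 1  # one cheap add/store round-trip per iteration
    elapsed = time.perf_counter() - t0
    return (n / elapsed if elapsed else float("inf")), x

rate, total = eval_loop_rate()
assert total == 100000   # the loop body actually ran n times
assert rate > 0          # some positive trips-per-second figure
```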

Marc-Andre Lemburg put together a much fancier suite, that times a wide
variety of basic Python operations and constructs more-or-less in isolation
from each other.  It can be very helpful in pinpointing specific timing
regressions.

That's it.




From tim_one at email.msn.com  Tue Sep 19 06:44:56 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 19 Sep 2000 00:44:56 -0400
Subject: [Python-Dev] test_minidom now failing on Windows
Message-ID: <LNBBLJKPBEHFEDALKOLCGENMHGAA.tim_one@email.msn.com>

http://sourceforge.net/bugs/?func=detailbug&bug_id=114775&group_id=5470

Add info (fails on Linux?  Windows-specific?) or fix or something; assigned
to Paul.




From guido at beopen.com  Tue Sep 19 08:05:55 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 19 Sep 2000 01:05:55 -0500
Subject: [Python-Dev] test_minidom now failing on Windows
In-Reply-To: Your message of "Tue, 19 Sep 2000 00:44:56 -0400."
             <LNBBLJKPBEHFEDALKOLCGENMHGAA.tim_one@email.msn.com> 
References: <LNBBLJKPBEHFEDALKOLCGENMHGAA.tim_one@email.msn.com> 
Message-ID: <200009190605.BAA01019@cj20424-a.reston1.va.home.com>

> http://sourceforge.net/bugs/?func=detailbug&bug_id=114775&group_id=5470
> 
> Add info (fails on Linux?  Windows-specific?) or fix or something; assigned
> to Paul.

It's obviously broken.  The test output contains numbers that are
specific per run:

<xml.dom.minidom.Document instance at 0xa104c8c>

and

[('168820100<class xml.dom.minidom.Element at 0xa0cc58c>', "{'childNodes': []}"), ('168926628<class xml.dom.minidom.Text at 0xa0ccfac>', "{'childNodes': []}"), ('168722260<class xml.dom.minidom.Text at 0xa0ccfac>', "{'childNodes': []}"), ('168655020<class xml.dom.minidom.Text at 0xa0ccfac>', "{'childNodes': []}"), ('168650868<class xml.dom.minidom.Text at 0xa0ccfac>', "{'childNodes': []}"), ('168663308<class xml.dom.minidom.Text at 0xa0ccfac>', "{'childNodes': []}"), ('168846892<class xml.dom.minidom.Text at 0xa0ccfac>', "{'childNodes': []}"), ('169039972<class xml.dom.minidom.Text at 0xa0ccfac>', "{'childNodes': []}"), ('168666508<class xml.dom.minidom.Element at 0xa0cc58c>', "{'childNodes': []}"), ('168730780<class xml.dom.minidom.Element at 0xa0cc58c>', "{'childNodes': []}")]

Paul, please fix this!!!!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From martin at loewis.home.cs.tu-berlin.de  Tue Sep 19 10:13:16 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Tue, 19 Sep 2000 10:13:16 +0200
Subject: [Python-Dev] Python 1.5.2 modules need porting to 2.0 because of unicode - comments please
Message-ID: <200009190813.KAA01033@loewis.home.cs.tu-berlin.de>

> The smtplib problem may be easily explained -- AFAIK, the SMTP
> protocol doesn't support Unicode, and the module isn't
> Unicode-aware, so it is probably writing garbage to the socket.

I've investigated this somewhat, and noticed the cause of the problem.
The send method of the socket passes the raw memory representation of
the Unicode object to send(2). On i386, this comes out as UTF-16LE.

It appears that this behaviour is not documented anywhere (where is
the original specification of the Unicode type, anyway?).

I believe this behaviour is a bug, on the grounds of being
confusing. The same holds for writing a Unicode string to a file in
binary mode. Again, it should not write out the internal
representation. Or else, why doesn't file.write(42) work? I want it
to write the internal representation in binary :-)

So in essence, I suggest that the Unicode object does not implement
the buffer interface. If that has any undesirable consequences (which
ones?), I suggest that 'binary write' operations (sockets, files)
explicitly check for Unicode objects, and either reject them, or
invoke the system encoding (i.e. ASCII). 

In the case of smtplib, this would do the right thing: the protocol
requires ASCII commands, so if anybody passes a Unicode string with
characters outside ASCII, you'd get an error.
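The behaviour argued for here is essentially what later Pythons adopted; a sketch using modern io objects, with io.BytesIO standing in for a binary file or socket:

```python
import io

# Binary sinks reject text objects outright instead of dumping an
# internal memory representation -- the behaviour proposed above.
buf = io.BytesIO()
try:
    buf.write("hello")  # text into a binary sink
    rejected = False
except TypeError:
    rejected = True

assert rejected                            # implicit write is refused
buf.write("hello".encode("ascii"))         # explicit encoding is required
assert buf.getvalue() == b"hello"
```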

Regards,
Martin



From effbot at telia.com  Tue Sep 19 10:35:29 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 19 Sep 2000 10:35:29 +0200
Subject: [Python-Dev] Python 1.5.2 modules need porting to 2.0 because of unicode - comments please
References: <200009190813.KAA01033@loewis.home.cs.tu-berlin.de>
Message-ID: <00cd01c02214$94c4f540$766940d5@hagrid>

martin wrote:

> I've investigated this somewhat, and noticed the cause of the problem.
> The send method of the socket passes the raw memory representation of
> the Unicode object to send(2). On i386, this comes out as UTF-16LE.
...
> I believe this behaviour is a bug, on the grounds of being
> confusing. The same holds for writing a Unicode string to a file in
> binary mode. Again, it should not write out the internal
> representation. Or else, why doesn't file.write(42) work? I want that
> it writes the internal representation in binary :-)
...
> So in essence, I suggest that the Unicode object does not implement
> the buffer interface.

I agree.

</F>



From mal at lemburg.com  Tue Sep 19 10:35:33 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 19 Sep 2000 10:35:33 +0200
Subject: [Python-Dev] 2.0 Optimization & speed
References: <LNBBLJKPBEHFEDALKOLCKENBHGAA.tim_one@email.msn.com>
Message-ID: <39C72555.E14D747C@lemburg.com>

Tim Peters wrote:
> 
> [Barry Scott]
> > There needs to be a set of benchmarks that can be used to test
> > the effect of any changes. Is there a set that exist already that
> > can be used?
> 
> None adequate.  Calls for volunteers in the past have been met with silence.
> 
> Lib/test/pyttone.py is remarkable in that it be the least typical of all
> Python programs <0.4 wink>.  It seems a good measure of how long it takes to
> make a trip around the eval loop, though.
> 
> Marc-Andre Lemburg put together a much fancier suite, that times a wide
> variety of basic Python operations and constructs more-or-less in isolation
> from each other.  It can be very helpful in pinpointing specific timing
> regressions.

Plus it's extensible, so you can add whatever test you feel you
need by simply dropping in a new module and editing a Setup
module. pybench is available from my Python Pages.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From mal at lemburg.com  Tue Sep 19 11:02:46 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 19 Sep 2000 11:02:46 +0200
Subject: [Python-Dev] Python 1.5.2 modules need porting to 2.0 because of 
 unicode - comments please
References: <200009190813.KAA01033@loewis.home.cs.tu-berlin.de>
Message-ID: <39C72BB6.A45A8E77@lemburg.com>

"Martin v. Loewis" wrote:
> 
> > The smtplib problem may be easily explained -- AFAIK, the SMTP
> > protocol doesn't support Unicode, and the module isn't
> > Unicode-aware, so it is probably writing garbage to the socket.
> 
> I've investigated this somewhat, and noticed the cause of the problem.
> The send method of the socket passes the raw memory representation of
> the Unicode object to send(2). On i386, this comes out as UTF-16LE.

The send method probably uses "s#" to write out the data. Since
this maps to the getreadbuf buffer slot, the Unicode object returns
a pointer to the internal buffer.
 
> It appears that this behaviour is not documented anywhere (where is
> the original specification of the Unicode type, anyway).

Misc/unicode.txt has it all. Documentation for PyArg_ParseTuple()
et al. is in Doc/ext/ext.tex.
 
> I believe this behaviour is a bug, on the grounds of being
> confusing. The same holds for writing a Unicode string to a file in
> binary mode. Again, it should not write out the internal
> representation. Or else, why doesn't file.write(42) work? I want that
> it writes the internal representation in binary :-)

This was discussed on python-dev at length earlier this year.
The outcome was that files opened in binary mode should write
raw object data to the file (using getreadbuf) while files opened
in text mode should write character data (using getcharbuf).
 
Note that Unicode objects are the first to distinguish
between getcharbuf and getreadbuf.

IMHO, the bug really is in getargs.c: "s" uses getcharbuf while
"s#" uses getreadbuf. Ideal would be using "t"+"t#" exclusively
for getcharbuf and "s"+"s#" exclusively for getreadbuf, but I guess
common usage prevents this.

> So in essence, I suggest that the Unicode object does not implement
> the buffer interface. If that has any undesirable consequences (which
> ones?), I suggest that 'binary write' operations (sockets, files)
> explicitly check for Unicode objects, and either reject them, or
> invoke the system encoding (i.e. ASCII).

It's too late for any generic changes in the Unicode area.

The right thing to do is to make the *tools* Unicode aware, since
you can't really expect the Unicode-string integration mechanism 
to fiddle things right in every possible case out there.

E.g. in the above case it is clear that 8-bit text is being sent over
the wire, so the smtplib module should explicitly call the .encode()
method to encode the data into whatever encoding is suitable.
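The tool-level fix described here can be sketched in modern Python, with io.BytesIO standing in for the socket (send_command is a hypothetical helper, not smtplib's API): the protocol layer encodes explicitly, and non-ASCII payloads fail loudly at the boundary.

```python
import io

def send_command(wire, line):
    """Hypothetical helper: encode an SMTP-style command explicitly
    before it hits the wire, as suggested above (io.BytesIO stands
    in for a socket)."""
    wire.write(line.encode("ascii") + b"\r\n")  # loud failure if non-ASCII

wire = io.BytesIO()
send_command(wire, "HELO example.org")
assert wire.getvalue() == b"HELO example.org\r\n"

try:
    send_command(wire, "HELO caf\u00e9")  # non-ASCII command text
    failed = False
except UnicodeEncodeError:
    failed = True
assert failed  # the error surfaces in the tool, not as wire garbage
```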

> In the case of smtplib, this would do the right thing: the protocol
> requires ASCII commands, so if anybody passes a Unicode string with
> characters outside ASCII, you'd get an error.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From mal at lemburg.com  Tue Sep 19 11:13:13 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 19 Sep 2000 11:13:13 +0200
Subject: [Python-Dev] Python 1.5.2 modules need porting to 2.0 because of 
 unicode - comments please
References: <000901c021cd$4a9b2df0$060210ac@private>
Message-ID: <39C72E29.6593F920@lemburg.com>

Barry Scott wrote:
> 
> > But regardless of where Barry's Unicode objects come from, his point
> > remains open.  Do we consider the library's lack of Unicode awareness a
> > bug, or do we drop any pretence of string and unicode objects being
> > interchangeable?

Python's stdlib is *not* Unicode ready. This should be seen as a project
for 2.1.

> > As a related issue, do we consider the fact that str(unicode_ob) often fails
> > to be a problem?  The users on c.l.py appear to...

It will only fail if the Unicode object is not compatible with the
default encoding. If users want to use a different encoding for
interfacing Unicode to strings, they should call .encode explicitly,
possibly through a helper function.

> > Mark.
> 
> Exactly.
> 
> I want unicode from Mark's code, unicode is goodness.
> 
> But the principle of least astonishment may well be broken in the library,
> indeed in the language.
> 
> It took me 40 minutes to prove that the unicode came from Mark's code and
> I know the code involved intimately. Debugging these failures is tedious.

To debug these things, simply switch off Unicode to string conversion
by editing site.py (look at the comments at the end of the module).
All conversion attempts will then raise an exception.

> I don't have an opinion as to the best resolution yet.
> 
> One option would be for Mark's code to default to string. But that does not
> help once someone chooses to enable unicode in Mark's code.
> 
> Maybe '%s' % u'x' should return 'x' not u'x' and u'%s' % 's' return u's'
> 
> Maybe 's' + u'x' should return 'sx' not u'sx'. and u's' + 'x' returns u'sx'
> 
> The above 2 maybe's would have hidden the problem in my code, barring exceptions.

When designing the Unicode-string integration we decided to
use the same coercion rules as for numbers: always coerce to the
"bigger" type. Anything else would have caused even more
difficulties.
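The numeric analogy drawn here is still observable directly: mixed arithmetic coerces to the "wider" type, the same rule the 2.0 string/Unicode integration followed.

```python
# Mixed numeric operations coerce to the "bigger" type -- the rule the
# string/Unicode integration adopted ('s' + u'x' -> u'sx' in 2.0).
assert type(1 + 2.0) is float      # int + float   -> float
assert type(1 + 2j) is complex     # int + complex -> complex
assert type(1.0 + 2j) is complex   # float + complex -> complex
```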

Again, what needs to be done is to make the tools Unicode aware,
not the magic ;-)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From fredrik at pythonware.com  Tue Sep 19 11:38:01 2000
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Tue, 19 Sep 2000 11:38:01 +0200
Subject: [Python-Dev] Python 1.5.2 modules need porting to 2.0 because of unicode - comments please
References: <200009190813.KAA01033@loewis.home.cs.tu-berlin.de> <39C72BB6.A45A8E77@lemburg.com>
Message-ID: <006601c0221d$4e55b690$0900a8c0@SPIFF>

mal wrote:

> > So in essence, I suggest that the Unicode object does not implement
> > the buffer interface. If that has any undesirable consequences (which
> > ones?), I suggest that 'binary write' operations (sockets, files)
> > explicitly check for Unicode objects, and either reject them, or
> > invoke the system encoding (i.e. ASCII).
> 
> It's too late for any generic changes in the Unicode area.

it's not too late to fix bugs.

> The right thing to do is to make the *tools* Unicode aware, since
> you can't really expect the Unicode-string integration mechanism 
> to fiddle things right in every possible case out there.

no, but people may expect Python to raise an exception instead
of doing something that is not only non-portable, but also clearly
wrong in most real-life cases.

</F>



From mal at lemburg.com  Tue Sep 19 12:34:40 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 19 Sep 2000 12:34:40 +0200
Subject: [Python-Dev] Python 1.5.2 modules need porting to 2.0 because of 
 unicode - comments please
References: <200009190813.KAA01033@loewis.home.cs.tu-berlin.de> <39C72BB6.A45A8E77@lemburg.com> <006601c0221d$4e55b690$0900a8c0@SPIFF>
Message-ID: <39C74140.B4A31C60@lemburg.com>

Fredrik Lundh wrote:
> 
> mal wrote:
> 
> > > So in essence, I suggest that the Unicode object does not implement
> > > the buffer interface. If that has any undesirable consequences (which
> > > ones?), I suggest that 'binary write' operations (sockets, files)
> > > explicitly check for Unicode objects, and either reject them, or
> > > invoke the system encoding (i.e. ASCII).
> >
> > It's too late for any generic changes in the Unicode area.
> 
> it's not too late to fix bugs.

I doubt that we can fix all Unicode related bugs in the 2.0
stdlib before the final release... let's make this a project 
for 2.1.
 
> > The right thing to do is to make the *tools* Unicode aware, since
> > you can't really expect the Unicode-string integration mechanism
> > to fiddle things right in every possible case out there.
> 
> no, but people may expect Python to raise an exception instead
> of doing something that is not only non-portable, but also clearly
> wrong in most real-life cases.

I completely agree that the divergence between "s" and "s#"
is not ideal at all, but that's something the buffer interface
design has to fix (not the Unicode design) since this is a
general problem. AFAIK, no other object distinguishes between
getreadbuf and getcharbuf... this is why the problem
has never shown up before.

Grepping through the stdlib, there are lots of places where
"s#" is expected to work on raw data and others where
conversion to string would be more appropriate, so the one
true solution is not clear at all.

Here are some possible hacks to work-around the Unicode problem:

1. switch off getreadbuf slot

   This would break many IO-calls w/r to Unicode support.

2. make getreadbuf return the same as getcharbuf (i.e. ASCII data)

   This could work, but would break slicing and indexing 
   for e.g. a UTF-8 default encoding.   

3. leave things as they are implemented now and live with the
   consequences (mark the Python stdlib as not Unicode compatible)

   Not ideal, but leaves room for discussion.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From loewis at informatik.hu-berlin.de  Tue Sep 19 14:11:00 2000
From: loewis at informatik.hu-berlin.de (Martin von Loewis)
Date: Tue, 19 Sep 2000 14:11:00 +0200 (MET DST)
Subject: [Python-Dev] sizehint in readlines
Message-ID: <200009191211.OAA06549@pandora.informatik.hu-berlin.de>

I've added support for the sizehint parameter in all places where it
was missing and the documentation referred to the file objects section
(socket, StringIO, cStringIO). The only remaining place with a
readlines function without sizehint is in multifile.py. I'll observe
that the documentation of this module is quite confused: it mentions a
str parameter for readline and readlines.

Should multifile.MultiFile.readlines also support the sizehint? (note
that read() deliberately does not support a size argument).

Regards,
Martin


From loewis at informatik.hu-berlin.de  Tue Sep 19 14:16:29 2000
From: loewis at informatik.hu-berlin.de (Martin von Loewis)
Date: Tue, 19 Sep 2000 14:16:29 +0200 (MET DST)
Subject: [Python-Dev] fileno function in file objects
Message-ID: <200009191216.OAA06594@pandora.informatik.hu-berlin.de>

Section 2.1.7.9 of the library reference explains that file objects
support a fileno method. Is that a mandatory operation on file-like
objects (e.g. StringIO)? If so, how should it be implemented? If not,
shouldn't the documentation declare it optional?

The same question for documented attributes: closed, mode, name,
softspace: need file-like objects to support them?
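Later Pythons answered this by making fileno explicitly optional: a file-like object with no OS descriptor raises rather than faking one. A sketch with modern io:

```python
import io

# StringIO is file-like but backed by no OS file descriptor, so its
# fileno() raises instead of returning a fake descriptor -- i.e. the
# operation is optional on file-like objects.
s = io.StringIO("data")
try:
    s.fileno()
    has_fd = True
except io.UnsupportedOperation:
    has_fd = False

assert has_fd is False
assert s.read() == "data"  # the core file-like protocol still works
```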

Regards,
Martin


From mal at lemburg.com  Tue Sep 19 14:42:24 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 19 Sep 2000 14:42:24 +0200
Subject: [Python-Dev] sizehint in readlines
References: <200009191211.OAA06549@pandora.informatik.hu-berlin.de>
Message-ID: <39C75F30.D23CEEF0@lemburg.com>

Martin von Loewis wrote:
> 
> I've added support for the sizehint parameter in all places where it
> was missing and the documentation referred to the file objects section
> (socket, StringIO, cStringIO). The only remaining place with a
> readlines function without sizehint is in multifile.py. I'll observe
> that the documentation of this module is quite confused: it mentions a
> str parameter for readline and readlines.
> 
> Should multifile.MultiFile.readlines also support the sizehint? (note
> that read() deliberately does not support a size argument).

Since it is an optional hint for the implementation, I'd suggest
adding the optional parameter without actually making any use of
it. The interface should be there though.
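
A minimal sketch of that suggestion (a hypothetical file-like class, not
the real multifile.MultiFile): readlines() accepts the optional sizehint
for interface compatibility but deliberately makes no use of it.

```python
class MultiFileLike:
    """Hypothetical sketch: readlines() takes the optional sizehint
    for file-object interface compatibility, but ignores it."""

    def __init__(self, lines):
        self._lines = list(lines)

    def readline(self):
        # Return the next line, or "" at end of data.
        return self._lines.pop(0) if self._lines else ""

    def readlines(self, sizehint=None):
        # sizehint is accepted but intentionally unused: it is only a hint.
        result = []
        while True:
            line = self.readline()
            if not line:
                break
            result.append(line)
        return result
```

Callers written against the file-object interface can then pass a
sizehint without caring whether the implementation honours it.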

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From mal at lemburg.com  Tue Sep 19 15:01:34 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 19 Sep 2000 15:01:34 +0200
Subject: [Python-Dev] Deja-Search on python.org defunct
Message-ID: <39C763AE.4B126CB1@lemburg.com>

The search button on python.org doesn't search the c.l.p newsgroup
anymore, but instead does a search over all newsgroups.

This link works:

http://www.deja.com/[ST_rn=ps]/qs.xp?ST=PS&svcclass=dnyr&firstsearch=yes&QRY=search_string_goes_here&defaultOp=AND&DBS=1&OP=dnquery.xp&LNG=english&subjects=&groups=comp.lang.python+comp.lang.python.announce&authors=&fromdate=&todate=&showsort=score&maxhits=25

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From guido at beopen.com  Tue Sep 19 16:28:42 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 19 Sep 2000 09:28:42 -0500
Subject: [Python-Dev] sizehint in readlines
In-Reply-To: Your message of "Tue, 19 Sep 2000 14:11:00 +0200."
             <200009191211.OAA06549@pandora.informatik.hu-berlin.de> 
References: <200009191211.OAA06549@pandora.informatik.hu-berlin.de> 
Message-ID: <200009191428.JAA02596@cj20424-a.reston1.va.home.com>

> I've added support for the sizehint parameter in all places where it
> was missing and the documentation referred to the file objects section
> (socket, StringIO, cStringIO). The only remaining place with a
> readlines function without sizehint is in multifile.py. I'll observe
> that the documentation of this module is quite confused: it mentions a
> str parameter for readline and readlines.

That's one for Fred...

> Should multifile.MultiFile.readlines also support the sizehint? (note
> that read() deliberately does not support a size argument).

I don't care about it here -- that API is clearly substandard.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From guido at beopen.com  Tue Sep 19 16:33:02 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 19 Sep 2000 09:33:02 -0500
Subject: [Python-Dev] fileno function in file objects
In-Reply-To: Your message of "Tue, 19 Sep 2000 14:16:29 +0200."
             <200009191216.OAA06594@pandora.informatik.hu-berlin.de> 
References: <200009191216.OAA06594@pandora.informatik.hu-berlin.de> 
Message-ID: <200009191433.JAA02626@cj20424-a.reston1.va.home.com>

> Section 2.1.7.9 of the library reference explains that file objects
> support a fileno method. Is that a mandatory operation on file-like
> objects (e.g. StringIO)? If so, how should it be implemented? If not,
> shouldn't the documentation declare it optional?
> 
> The same question for documented attributes: closed, mode, name,
> softspace: need file-like objects to support them?

fileno() (and isatty()) is OS specific and only works if there *is* an
underlying file number.  It should not be implemented (not even as
raising an exception) if it isn't there.

Support for softspace is needed when you expect to be printing to a
file.

The others are implementation details of the built-in file object, but
would be nice to have if they can be implemented; code that requires
them is not fully supportive of file-like objects.

Note that this (and other, similar issues) is all because Python
doesn't have a standard class hierarchy.  I expect that we'll fix all
this in Python 3000.  Until then, I guess we have to muddle forth...
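
A hedged illustration of that advice (a hypothetical class, not from the
stdlib, shown in modern syntax): implement write() and the bookkeeping
attributes, and simply omit fileno()/isatty() rather than defining them
to raise.

```python
class LogWriter:
    """Minimal file-like object usable as a print target.

    fileno() and isatty() are deliberately absent: there is no
    underlying OS file descriptor, so the methods should not exist.
    """

    def __init__(self):
        self.buffer = []
        self.closed = False
        self.softspace = 0  # needed when targeted by the 1.x 'print' statement

    def write(self, s):
        self.buffer.append(s)

    def close(self):
        self.closed = True
```

Code that probes hasattr(f, "fileno") before calling it then degrades
gracefully on such objects.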

BTW, did you check in test cases for all the methods you fixed?

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From bwarsaw at beopen.com  Tue Sep 19 17:43:15 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Tue, 19 Sep 2000 11:43:15 -0400 (EDT)
Subject: [Python-Dev] fileno function in file objects
References: <200009191216.OAA06594@pandora.informatik.hu-berlin.de>
	<200009191433.JAA02626@cj20424-a.reston1.va.home.com>
Message-ID: <14791.35219.817065.241735@anthem.concentric.net>

>>>>> "GvR" == Guido van Rossum <guido at beopen.com> writes:

    GvR> Note that this (and other, similar issues) is all because
    GvR> Python doesn't have a standard class hierarchy.

Or a formal interface mechanism.

-Barry


From bwarsaw at beopen.com  Tue Sep 19 17:43:50 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Tue, 19 Sep 2000 11:43:50 -0400 (EDT)
Subject: [Python-Dev] sizehint in readlines
References: <200009191211.OAA06549@pandora.informatik.hu-berlin.de>
	<200009191428.JAA02596@cj20424-a.reston1.va.home.com>
Message-ID: <14791.35254.565129.298375@anthem.concentric.net>

>>>>> "GvR" == Guido van Rossum <guido at beopen.com> writes:

    >> Should multifile.MultiFile.readlines also support the sizehint?
    >> (note that read() deliberately does not support a size
    >> argument).

    GvR> I don't care about it here -- that API is clearly
    GvR> substandard.

Indeed!
-Barry


From klm at digicool.com  Tue Sep 19 20:25:04 2000
From: klm at digicool.com (Ken Manheimer)
Date: Tue, 19 Sep 2000 14:25:04 -0400 (EDT)
Subject: [Python-Dev] fileno function in file objects - Interfaces
 Scarecrow
In-Reply-To: <14791.35219.817065.241735@anthem.concentric.net>
Message-ID: <Pine.LNX.4.21.0009191357370.24497-200000@korak.digicool.com>

Incidentally...

On Tue, 19 Sep 2000, Barry A. Warsaw wrote:

> >>>>> "GvR" == Guido van Rossum <guido at beopen.com> writes:
> 
>     GvR> Note that this (and other, similar issues) is all because
>     GvR> Python doesn't have a standard class hierarchy.
> 
> Or a formal interface mechanism.

Incidentally, jim/Zope is going forward with something like the interfaces
strawman - the "scarecrow" - that jim proposed at IPC?7?.  I don't know if
a PEP would have made any sense for 2.x, so maybe it's just as well we
haven't had time.  In the meanwhile, DC will get a chance to get
experience with and refine it... 

Anyway, for anyone that might be interested, i'm attaching a copy of
python/lib/Interfaces/README.txt from a recent Zope2 checkout.  I was
pretty enthusiastic about it when jim originally presented the scarecrow,
and on skimming it now it looks very cool.  (I'm not getting it all on my
quick peruse, and i suspect there's some contortions that wouldn't be
necessary if it were happening more closely coupled with python
development - but what jim sketches out is surprisingly sleek,
regardless...)

ken
klm at digicool.com
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: README.txt
URL: <http://mail.python.org/pipermail/python-dev/attachments/20000919/d42b6c84/attachment-0001.txt>

From martin at loewis.home.cs.tu-berlin.de  Tue Sep 19 22:48:53 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Tue, 19 Sep 2000 22:48:53 +0200
Subject: [Python-Dev] Python 1.5.2 modules need porting to 2.0 because of unicode - comments please
Message-ID: <200009192048.WAA01414@loewis.home.cs.tu-berlin.de>

> I doubt that we can fix all Unicode related bugs in the 2.0
> stdlib before the final release... let's make this a project 
> for 2.1.

Exactly my feelings. Since we cannot possibly fix all problems, we may
need to change the behaviour later.

If we now silently do the wrong thing, silently changing it to the
then-right thing in 2.1 may break people's code. So I'm asking that
cases where it does not clearly do the right thing produce an
exception now; we can later fix it to accept more cases, should the
need arise.

In the specific case, dropping support for Unicode output in binary
files is the right thing. We don't know what the user expects, so it
is better to produce an exception than to silently put incorrect bytes
into the stream - that is a bug that we still can fix.

The easiest way with the clearest impact is to drop the buffer
interface in unicode objects. Alternatively, not supporting them
for s# also appears reasonable. Users experiencing the problem in
testing will then need to make an explicit decision how they want to
encode the Unicode objects.

If expediting the issue is necessary, I can submit a bug report
and propose a patch.

Regards,
Martin


From guido at beopen.com  Wed Sep 20 00:00:34 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 19 Sep 2000 17:00:34 -0500
Subject: [Python-Dev] Python 1.5.2 modules need porting to 2.0 because of unicode - comments please
In-Reply-To: Your message of "Tue, 19 Sep 2000 22:48:53 +0200."
             <200009192048.WAA01414@loewis.home.cs.tu-berlin.de> 
References: <200009192048.WAA01414@loewis.home.cs.tu-berlin.de> 
Message-ID: <200009192200.RAA01853@cj20424-a.reston1.va.home.com>

> > I doubt that we can fix all Unicode related bugs in the 2.0
> > stdlib before the final release... let's make this a project 
> > for 2.1.
> 
> Exactly my feelings. Since we cannot possibly fix all problems, we may
> need to change the behaviour later.
> 
> If we now silently do the wrong thing, silently changing it to the
> then-right thing in 2.1 may break people's code. So I'm asking that
> cases where it does not clearly do the right thing produce an
> exception now; we can later fix it to accept more cases, should the
> need arise.
> 
> In the specific case, dropping support for Unicode output in binary
> files is the right thing. We don't know what the user expects, so it
> is better to produce an exception than to silently put incorrect bytes
> into the stream - that is a bug that we still can fix.
> 
> The easiest way with the clearest impact is to drop the buffer
> interface in unicode objects. Alternatively, not supporting them
> for s# also appears reasonable. Users experiencing the problem in
> testing will then need to make an explicit decision how they want to
> encode the Unicode objects.
> 
> If expediting the issue is necessary, I can submit a bug report
> and propose a patch.

Sounds reasonable to me (but I haven't thought of all the issues).

For writing binary Unicode strings, one can use

  f.write(u.encode("utf-16"))		# Adds byte order mark
  f.write(u.encode("utf-16-be"))	# Big-endian
  f.write(u.encode("utf-16-le"))	# Little-endian
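
In modern syntax (the 2.0-era u'' literals behave the same way) the
difference between the three codecs can be checked directly; note that
the byte order mark written by plain "utf-16" follows the platform's
native endianness:

```python
# The Euro sign, U+20AC, under the three UTF-16 codecs.
data = "\u20ac".encode("utf-16")
assert data[:2] in (b"\xff\xfe", b"\xfe\xff")        # byte order mark first
assert "\u20ac".encode("utf-16-be") == b"\x20\xac"   # big-endian, no BOM
assert "\u20ac".encode("utf-16-le") == b"\xac\x20"   # little-endian, no BOM
```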

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From mal at lemburg.com  Tue Sep 19 23:29:06 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 19 Sep 2000 23:29:06 +0200
Subject: [Python-Dev] Python 1.5.2 modules need porting to 2.0 because of 
 unicode - comments please
References: <200009192048.WAA01414@loewis.home.cs.tu-berlin.de> <200009192200.RAA01853@cj20424-a.reston1.va.home.com>
Message-ID: <39C7DAA2.A04E5008@lemburg.com>

Guido van Rossum wrote:
> 
> > > I doubt that we can fix all Unicode related bugs in the 2.0
> > > stdlib before the final release... let's make this a project
> > > for 2.1.
> >
> > Exactly my feelings. Since we cannot possibly fix all problems, we may
> > need to change the behaviour later.
> >
> > If we now silently do the wrong thing, silently changing it to the
> > then-right thing in 2.1 may break people's code. So I'm asking that
> > cases where it does not clearly do the right thing produce an
> > exception now; we can later fix it to accept more cases, should the
> > need arise.
> >
> > In the specific case, dropping support for Unicode output in binary
> > files is the right thing. We don't know what the user expects, so it
> > is better to produce an exception than to silently put incorrect bytes
> > into the stream - that is a bug that we still can fix.
> >
> > The easiest way with the clearest impact is to drop the buffer
> > interface in unicode objects. Alternatively, not supporting them
> > for s# also appears reasonable. Users experiencing the problem in
> > testing will then need to make an explicit decision how they want to
> > encode the Unicode objects.
> >
> > If expediting the issue is necessary, I can submit a bug report
> > and propose a patch.
> 
> Sounds reasonable to me (but I haven't thought of all the issues).
> 
> For writing binary Unicode strings, one can use
> 
>   f.write(u.encode("utf-16"))           # Adds byte order mark
>   f.write(u.encode("utf-16-be"))        # Big-endian
>   f.write(u.encode("utf-16-le"))        # Little-endian

Right.

Possible ways to fix this:

1. disable Unicode's getreadbuf slot

   This would effectively make Unicode objects unusable for
   all APIs which use "s#"... and probably give people a lot
   of headaches. OTOH, it would probably motivate lots of
   users to submit patches which make the stdlib
   Unicode-aware (hopefully ;-)

2. same as 1., but also make "s#" fall back to getcharbuf
   in case getreadbuf is not defined

   This would make Unicode objects compatible with "s#", but
   still prevent writing of binary data: getcharbuf returns
   the Unicode object encoded using the default encoding which
   is ASCII per default.

3. special case "s#" in some way to handle Unicode or to
   raise an exception pointing explicitly to the problem
   and its (possible) solution

I'm not sure which of these paths to take. Perhaps solution
2. is the most feasible compromise between "exceptions everywhere"
and "encoding confusion".

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From guido at beopen.com  Wed Sep 20 00:47:11 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 19 Sep 2000 17:47:11 -0500
Subject: [Python-Dev] Missing API in re module
Message-ID: <200009192247.RAA02122@cj20424-a.reston1.va.home.com>

When investigating and fixing Tim's report that the Replace dialog in
IDLE was broken, I realized that there's an API missing from the re
module.

For search-and-replace, IDLE uses a regular expression to find the
next match, and then needs to do whatever sub() does to that match.
But there's no API to spell "whatever sub() does"!  It's not safe to
call sub() on just the matching substring -- the match might depend on
context.

It seems that a new API is needed.  I propose to add the following
method of match objects:

  match.expand(repl)

    Return the string obtained by doing backslash substitution as for
    the sub() method in the replacement string: expansion of \n ->
    linefeed etc., and expansion of numeric backreferences (\1, \2,
    ...) and named backreferences (\g<1>, \g<name>, etc.);
    backreferences refer to groups in the match object.

Or am I missing something and is there already a way to do this?
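
A quick sketch of the behaviour described above, as it works in today's
re module (the proposed Match.expand() method did land):

```python
import re

m = re.search(r"(?P<first>\w+) (?P<second>\w+)", "hello world")

# Backslash escapes and backreferences are expanded exactly as sub()
# would expand them in a replacement template:
assert m.expand(r"\2 \1") == "world hello"                 # numeric groups
assert m.expand(r"\g<second>, \g<first>!") == "world, hello!"  # named groups
assert m.expand(r"\1\n\2") == "hello\nworld"               # \n -> linefeed
```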

(Side note: the SRE code does some kind of compilation on the
replacement template; I'd like to see this cached, as otherwise IDLE's
replace-all button will take forever...)

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From thomas at xs4all.net  Wed Sep 20 15:23:10 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 20 Sep 2000 15:23:10 +0200
Subject: [Python-Dev] problems importing _tkinter on Linux build
In-Reply-To: <20000917144614.A25718@ActiveState.com>; from trentm@ActiveState.com on Sun, Sep 17, 2000 at 02:46:14PM -0700
References: <20000917142718.A25180@ActiveState.com> <20000917144614.A25718@ActiveState.com>
Message-ID: <20000920152309.A6675@xs4all.nl>

On Sun, Sep 17, 2000 at 02:46:14PM -0700, Trent Mick wrote:
> On Sun, Sep 17, 2000 at 02:27:18PM -0700, Trent Mick wrote:
> > 
> > I get the following error trying to import _tkinter in a Python 2.0 build:
> > 
> > > ./python
> > ./python: error in loading shared libraries: libtk8.3.so: cannot open shared object file: No such file or directory
> > 

> Duh, learning about LD_LIBRARY_PATH (set LD_LIBRARY_PATH to /usr/local/lib)
> and everything is hunky dory. I presumed that /usr/local/lib would be
> on the default search path for shared libraries. Bad assumption I guess.

On *some* ELF systems (at least Linux and BSDI) you can add /usr/local/lib
to /etc/ld.so.conf and rerun 'ldconfig' (which builds the cachefile
/etc/ld.so.cache, which is used as the search path.) I personally find
this a much better approach than LD_LIBRARY_PATH or -R/-rpath,
especially for 'system-wide' shared libraries (you can use one of the other
approaches if you want to tie a specific binary to a specific shared library
in a specific directory, or have a binary use a different shared library
(from a different directory) in some of the cases -- though you can use
LD_PRELOAD and such for that as well.)

If you tie your binary to a specific directory, you might lose portability,
necessitating ugly script-hacks that find & set a proper LD_LIBRARY_PATH or
LD_PRELOAD and such before calling the real program. I'm not sure if recent
SunOS's support something like ld.so.conf, but old ones didn't, and I sure
wish they did ;)

Back-from-vacation-and-trying-to-catch-up-on-2000+-mails-ly y'rs,
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From mal at lemburg.com  Wed Sep 20 16:22:44 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Wed, 20 Sep 2000 16:22:44 +0200
Subject: [Python-Dev] Python syntax checker ?
Message-ID: <39C8C834.5E3B90E7@lemburg.com>

Would it be possible to write a Python syntax checker that doesn't
stop processing at the first error it finds but instead tries
to continue as far as possible (much like make -k) ?

If yes, could the existing Python parser/compiler be reused for
such a tool ?

I was asked to write a tool which checks Python code and returns
a list of found errors (syntax errors and possibly even some
lint warnings) instead of stopping at the first error it finds.

Thanks for any tips,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From loewis at informatik.hu-berlin.de  Wed Sep 20 19:07:06 2000
From: loewis at informatik.hu-berlin.de (Martin von Loewis)
Date: Wed, 20 Sep 2000 19:07:06 +0200 (MET DST)
Subject: [Python-Dev] Python syntax checker ?
Message-ID: <200009201707.TAA07172@pandora.informatik.hu-berlin.de>

> Would it be possible to write a Python syntax checker that doesn't
> stop processing at the first error it finds but instead tries to
> continue as far as possible (much like make -k) ?

In "Compilerbau" (compiler construction), this is referred to as
"Fehlerstabilisierung" (error stabilization). I suggest having a look
at the dragon book (Aho, Sethi, Ullman).

The common approach is to insert or remove tokens, using some
heuristics. In YACC, it is possible to add error productions to the
grammar. Whenever an error occurs, the parser assigns all tokens to
the "error" non-terminal until it concludes that it can perform a
reduce action.

A similar approach might work for the Python Grammar. For each
production, you'd define a set of stabilization tokens. If these are
encountered, then the rule would be considered complete. Everything is
consumed until a stabilization token is found.

For example, all expressions could be stabilized with a
keyword. I.e. if you encounter a syntax error inside an expression,
you ignore all tokens until you see 'print', 'def', 'while', etc.

In some cases, it may be better to add input rather than removing
it. For example, if you get an "inconsistent dedent" error, you could
assume that this really was a consistent dedent, or you could assume
it was not meant as a dedent at all. Likewise, if you get a
single-quote start-of-string with no closing single-quote before
end-of-line, you should just assume there was one.

Adding error productions to ignore input until stabilization may be
feasible on top of the existing parser. Adding tokens in the right
place is probably harder - I'd personally go for a pure Python
solution, that operates on Grammar/Grammar.
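
A crude, pure-Python sketch of the stabilization idea (a hypothetical
helper, not a real recovering parser): after each error, "stabilize" by
blanking the offending line and recompiling until the source is clean.

```python
def check_syntax(source, max_errors=20):
    """Collect several SyntaxErrors from one source string.

    Crude stabilization: after each error, blank the offending
    line and try the whole compilation again.
    """
    lines = source.splitlines()
    errors = []
    for _ in range(max_errors):
        try:
            compile("\n".join(lines), "<string>", "exec")
            break
        except SyntaxError as err:
            errors.append((err.lineno, err.msg))
            if not err.lineno or not (1 <= err.lineno <= len(lines)):
                break  # cannot localize the error; give up
            lines[err.lineno - 1] = ""  # drop the bad line and retry
    return errors
```

This is quadratic and line-based, but it demonstrates "make -k"-style
error collection on top of the existing compiler.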

Regards,
Martin



From tismer at appliedbiometrics.com  Wed Sep 20 18:35:50 2000
From: tismer at appliedbiometrics.com (Christian Tismer)
Date: Wed, 20 Sep 2000 19:35:50 +0300
Subject: [Python-Dev] 2.0 Optimization & speed
References: <200009082048.WAA14671@python.inrialpes.fr> <39B951CC.3C0AE801@lemburg.com>
Message-ID: <39C8E766.18D9BDD8@appliedbiometrics.com>


"M.-A. Lemburg" wrote:
> 
> Vladimir Marangozov wrote:
> >
> > M.-A. Lemburg wrote:
> > >
> > > Fredrik Lundh wrote:
> > > >
> > > > mal wrote:

...

> > Hey Marc-Andre, don't try to reduce /F's crunching efforts to dust.
> 
> Oh, I didn't try to reduce Fredrik's efforts at all. To the
> contrary: I'm still looking forward to his melted down version
> of the database and the ctype tables.

Howdy. It may be that not you but I will melt /F's efforts
to dust, since I might have one or two days of time
to finish my long-promised code generator :-)
Well, probably we'll just be merging our dust :-)

> > Every bit costs money, and that's why
> > Van Jacobson packet-header compression has been invented and is
> > massively used. Whole armies of researchers are currently trying to
> > compensate the irresponsible bloatware that people of the higher
> > layers are imposing on them <wink>. Careful!
> 
> True, but why the hurry ?

I have no reason to complain since I didn't do my homework.
Anyway, a partially bloated distribution might be harmful
to Python's reputation. When looking through the whole
source set, there is no bloat anywhere. Everything is
well thought out, and fairly optimized between space and speed.
Well, there is this one module which cries for being replaced,
and which still prevents *me* from moving to Python 1.6 :-)

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer at appliedbiometrics.com>
Applied Biometrics GmbH      :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com


From martin at loewis.home.cs.tu-berlin.de  Wed Sep 20 21:22:24 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Wed, 20 Sep 2000 21:22:24 +0200
Subject: [Python-Dev] Disabling Unicode readbuffer interface
Message-ID: <200009201922.VAA01669@loewis.home.cs.tu-berlin.de>

I just tried to disable the getreadbufferproc on Unicode objects. Most
of the test suite continues to work. 

test_unicode fails, which is caused by "s#" no longer working
in readbuffer_encode when testing the unicode_internal encoding.
That could be fixed (*).

More concerning, sre fails when matching a unicode string. sre uses
the getreadbufferproc to get at the internal representation. If the
buffer has sizeof(Py_UNICODE) times as many bytes as the object is
long, it assumes we got a unicode buffer (?!?).

I'm not sure what the right solution would be in this case: I *think*
sre should have more specific knowledge of Unicode objects, so it
should support objects with a buffer interface representing a 1-byte
character string, or Unicode objects. Actually, is there anything
wrong with sre operating on string and unicode objects only? It
requires that the buffer has a single segment, anyway...

Regards,
Martin

(*) The 'internal encoding' function should directly get to the
representation of the unicode object, and readbuffer_encode could
become Python:

def readbuffer_encode(o,errors="strict"):
  b = buffer(o)
  return str(b),len(b)

or be removed altogether, as it would (rightfully) stop working on
unicode objects.


From effbot at telia.com  Wed Sep 20 21:57:16 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Wed, 20 Sep 2000 21:57:16 +0200
Subject: [Python-Dev] Disabling Unicode readbuffer interface
References: <200009201922.VAA01669@loewis.home.cs.tu-berlin.de>
Message-ID: <021801c0233c$fec04fc0$766940d5@hagrid>

martin wrote:
> More concerning, sre fails when matching a unicode string. sre uses
> the getreadbufferproc to get to the internal representation. If it has
> sizeof(Py_UNICODE) times as many bytes as it is long, we got a unicode
> buffer (?!?).

...or an integer buffer.

(who says you can only use regular expressions on character
strings? ;-)

> I'm not sure what the right solution would be in this case: I *think*
> sre should have more specific knowledge of Unicode objects, so it
> should support objects with a buffer interface representing a 1-byte
> character string, or Unicode objects. Actually, is there anything
> wrong with sre operating on string and unicode objects only?

let's add a special case for unicode strings.  I'm actually using
the integer buffer support (don't ask), so I'd prefer to leave it
in there.

no time tonight, but I can check in a fix tomorrow.

</F>



From thomas at xs4all.net  Wed Sep 20 22:02:48 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 20 Sep 2000 22:02:48 +0200
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: <200009201707.TAA07172@pandora.informatik.hu-berlin.de>; from loewis@informatik.hu-berlin.de on Wed, Sep 20, 2000 at 07:07:06PM +0200
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de>
Message-ID: <20000920220248.E6675@xs4all.nl>

On Wed, Sep 20, 2000 at 07:07:06PM +0200, Martin von Loewis wrote:
> Adding error productions to ignore input until stabilization may be
> feasible on top of the existing parser. Adding tokens in the right
> place is probably harder - I'd personally go for a pure Python
> solution, that operates on Grammar/Grammar.

Don't forget that there are two kinds of SyntaxErrors in Python: those that
are generated by the tokenizer/parser, and those that are actually generated
by the (bytecode-)compiler. (inconsistent indent/dedent errors, incorrect
uses of (augmented) assignment, incorrect placing of particular keywords,
etc, are all generated while actually compiling the code.) Also, in order to
be really useful, the error-indicator would have to be pretty intelligent.
Imagine something like this:

if 1:

     doodle()

    forever()
    and_ever()
    <tons more code using 4-space indent>

With the current interpreter, that would generate a single warning, on the
line below the one that is the actual problem. If you continue searching for
errors, you'll get tons and tons of errors, all because the first line was
indented too far.

An easy way to work around it is probably to consider all tokenizer
errors and some of the compiler-generated errors (like indent/dedent
ones) as really fatal, and only handle the errors that are likely to
be manageable, skipping over the affected lines or treating them as
no-ops.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From martin at loewis.home.cs.tu-berlin.de  Wed Sep 20 22:50:30 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Wed, 20 Sep 2000 22:50:30 +0200
Subject: [Python-Dev] [ Bug #110676 ] fd.readlines() hangs (via popen3) (PR#385)
Message-ID: <200009202050.WAA02298@loewis.home.cs.tu-berlin.de>

I've closed your report at

http://sourceforge.net/bugs/?func=detailbug&bug_id=110676&group_id=5470

That is a bug in the application code. The slave tries to write 6000
bytes to stderr, and blocks after writing 4096 (number measured on
Linux; more generally, after _PC_PIPE_BUF bytes).  The master starts
reading the slave's stdout, and blocks also, so you get a deadlock.  The proper
solution is to use 

import popen2

r, w, e = popen2.popen3('python slave.py')
e.readlines()
r.readlines()
r.close()
e.close()
w.close()

as the master, and 

import sys, posix

e = sys.stderr.write
w = sys.stdout.write

e(400 * 'this is a test\n')
posix.close(2)
w(400 * 'this is another test\n')

as the slave. Notice that stderr must be closed after writing all
data, or readlines won't return. Also notice that posix.close must be
used, as sys.stderr.close() won't close stderr (apparently due to
concerns that assigning to sys.stderr would silently close it, so
that no further errors could be printed).

In general, it would be better to use select(2) on the files returned
from popen3, or spread the reading of the individual files onto
several threads.
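
For comparison, today's subprocess module handles exactly this:
communicate() drains stdout and stderr concurrently, so neither pipe
can fill up and deadlock. A hedged sketch of the same master/slave
exchange (subprocess did not exist in 2000):

```python
import subprocess
import sys

# The slave writes heavily to both stderr and stdout, as in the bug report.
slave = ("import sys\n"
         "sys.stderr.write(400 * 'this is a test\\n')\n"
         "sys.stdout.write(400 * 'this is another test\\n')\n")

# communicate() reads both pipes concurrently, avoiding the deadlock.
proc = subprocess.Popen([sys.executable, "-c", slave],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                        text=True)
out, err = proc.communicate()
```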

Regards,
Martin


From MarkH at ActiveState.com  Thu Sep 21 01:37:31 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Thu, 21 Sep 2000 10:37:31 +1100
Subject: [Python-Dev] FW: [humorix] Unobfuscated Perl Code Contest
Message-ID: <ECEPKNMJLHAPFFJHDOJBGEFJDKAA.MarkH@ActiveState.com>

And now for something completely different ;-)
--
Unobfuscated Perl Code Contest
September 16, 19100

The Perl Gazette has announced the winners in the First
Annual _Un_obfuscated Perl Code Contest.  First place went
to Edwin Fuller, who submitted this unobfuscated program:

#!/usr/bin/perl
print "Hello world!\n";

"This was definitely a challenging contest," said an
ecstatic Edwin Fuller. "I've never written a Perl program
before that didn't have hundreds of qw( $ @ % & * | ? / \ !
# ~ ) symbols.  I really had to summon all of my
programming skills to produce an unobfuscated program."

The judges in the contest learned that many programmers
don't understand the meaning of 'unobfuscated perl'.  For
instance, one participant sent in this 'Hello world!'
program:

#!/usr/bin/perl
$x='unob';
open OUT, ">$x.c";
print OUT <<HERE_DOC;
#include <stdio.h>
int main(void) { 
 FILE *f=fopen("$x.sh", "w");
 fprintf(f,"echo Hello world!\\n");
 fclose(f);
 system("chmod +x $x.sh");
 system("./$x.sh"); return 0; 
}
HERE_DOC
close OUT;
system("gcc $x.c -o $x && ./$x");

"As an experienced Perl monger," said one of the judges, "I
can instantly tell that this program spits out C source
code that spits out a shell script to print 'Hello
world!'.  But this code certainly does not qualify as
unobfuscated Perl -- I mean, most of it isn't even written
in Perl!"

He added, "Out of all of the entries, only two were
actually unobfuscated perl.  Everything else looked like
line noise -- or worse."

The second place winner, Mrs. Sea Pearl, submitted the
following code:

#!/usr/bin/perl
use strict;
# Do nothing, successfully
exit(0);

"I think everybody missed the entire point of this
contest," ranted one judge.  "Participants were supposed to
produce code that could actually be understood by somebody
other than a ten-year Perl veteran.  Instead, we get an
implementation of a Java Virtual Machine.  And a version of
the Linux kernel ported to Win32 Perl.  Sheesh!"

In response to the news, a rogue group of Perl hackers have
presented a plan to add a "use really_goddamn_strict"
pragma to the language that would enforce readability and
unobfuscation.  With this pragma in force, the Perl
compiler might say:

 Warning: Program contains zero comments.  You've probably
 never seen or used one before; they begin with a #
 symbol.  Please start using them or else a representative
 from the nearest Perl Mongers group will come to your
 house and beat you over the head with a cluestick.

 Warning: Program uses a cute trick at line 125 that might
 make sense in C.  But this isn't C!

 Warning: Code at line 412 indicates that programmer is an
 idiot. Please correct error between chair and monitor.

 Warning: While There's More Than One Way To Do It, your
 method at line 523 is particularly stupid.  Please try
 again.

 Warning: Write-only code detected between lines 612 and
 734. While this code is perfectly legal, you won't have
 any clue what it does in two weeks.  I recommend you start
 over.

 Warning: Code at line 1,024 is indistinguishable from line
 noise or the output of /dev/random

 Warning: Have you ever properly indented a piece of code
 in your entire life?  Evidently not.

 Warning: I think you can come up with a more descriptive
 variable name than "foo" at line 1,523.

 Warning: Programmer attempting to re-invent the wheel at
 line 2,231. There's a function that does the exact same
 thing on CPAN -- and it actually works.

 Warning: Perl tries to make the easy jobs easy without
 making the hard jobs impossible -- but your code at line
 5,123 is trying to make an easy job impossible.  

 Error: Programmer failed to include required string "All
 hail Larry Wall" within program.  Execution aborted due to
 compilation errors.

Of course, convincing programmers to actually use that
pragma is another matter.  "If somebody actually wanted to
write readable code, why would they use Perl?  Let 'em use
Python!" exclaimed one Usenet regular.  "So this pragma is
a waste of electrons, just like use strict and the -w
command line parameter."

-
Humorix:      Linux and Open Source(nontm) on a lighter note
Archive:      http://humbolt.nl.linux.org/lists/
Web site:     http://www.i-want-a-website.com/about-linux/

----- End forwarded message -----



From bwarsaw at beopen.com  Thu Sep 21 02:02:22 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Wed, 20 Sep 2000 20:02:22 -0400 (EDT)
Subject: [Python-Dev] forwarded message from noreply@sourceforge.net
Message-ID: <14793.20494.375237.320590@anthem.concentric.net>


For those of you who may not have received this message, please be
aware that SourceForge will have scheduled downtime this Friday night
until Saturday morning.

-Barry

-------------- next part --------------
An embedded message was scrubbed...
From: noreply at sourceforge.net
Subject: SourceForge:  Important Site News
Date: Tue, 12 Sep 2000 19:58:47 -0700
Size: 2802
URL: <http://mail.python.org/pipermail/python-dev/attachments/20000920/fe211637/attachment-0001.eml>

From tim_one at email.msn.com  Thu Sep 21 02:19:41 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 20 Sep 2000 20:19:41 -0400
Subject: [Python-Dev] forwarded message from noreply@sourceforge.net
In-Reply-To: <14793.20494.375237.320590@anthem.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCMECNHHAA.tim_one@email.msn.com>

[Barry A. Warsaw]
> For those of you who may not have received this message, please be
> aware that SourceForge will have scheduled downtime this Friday night
> until Saturday morning.

... This move will take place on Friday night (Sept 22nd) at 10pm and
    continue to 8am Saturday morning (Pacific Standard Time).  During
    this time the site will be off-line as we make the physical change.

Looks to me like they started 30 hours early!  SF has been down more than up
all day, by my account.

So, for recreation in our idly desperate moments, let me recommend a quick
read, and especially to our friends at BeOpen, ActiveState and Secret Labs:

    http://linuxtoday.com/news_story.php3?ltsn=2000-09-20-006-21-OP-BZ-LF
    "Savor the Unmarketed Moment"
    "Marketers are drawn to money as surely as maggots were drawn
    to aforementioned raccoon ...
    The Bazaar is about to be blanketed with smog emitted by the
    Cathedral's smokestacks.  Nobody will be prevented from doing
    whatever he or she was doing before, but the oxygen level will
    be dropping and visibility will be impaired."

gasping-a-bit-from-the-branding-haze-himself<0.5-wink>-ly y'rs  - tim




From guido at beopen.com  Thu Sep 21 03:57:39 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 20 Sep 2000 20:57:39 -0500
Subject: [Python-Dev] SourceForge downtime postponed
In-Reply-To: Your message of "Wed, 20 Sep 2000 20:19:41 -0400."
             <LNBBLJKPBEHFEDALKOLCMECNHHAA.tim_one@email.msn.com> 
References: <LNBBLJKPBEHFEDALKOLCMECNHHAA.tim_one@email.msn.com> 
Message-ID: <200009210157.UAA05881@cj20424-a.reston1.va.home.com>

> Looks to me like they started 30 hours early!  SF has been down more than up
> all day, by my account.

Actually, they're back in business, and they improved the Bugs manager!
(E.g. there are now group management facilities on the front page.)

They also mailed around today that the move won't be until mid
October.  That's good, insofar as it doesn't take SF away from us
while we're in the heat of the 2nd beta release!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From guido at beopen.com  Thu Sep 21 04:17:20 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 20 Sep 2000 21:17:20 -0500
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: Your message of "Wed, 20 Sep 2000 16:22:44 +0200."
             <39C8C834.5E3B90E7@lemburg.com> 
References: <39C8C834.5E3B90E7@lemburg.com> 
Message-ID: <200009210217.VAA06180@cj20424-a.reston1.va.home.com>

> Would it be possible to write a Python syntax checker that doesn't
> stop processing at the first error it finds but instead tries
> to continue as far as possible (much like make -k) ?
> 
> If yes, could the existing Python parser/compiler be reused for
> such a tool ?
> 
> I was asked to write a tool which checks Python code and returns
> a list of found errors (syntax error and possibly even some
> lint warnings) instead of stopping at the first error it finds.

I had some ideas for this in the context of CP4E, and I even tried to
implement some, but didn't get far enough to check it in anywhere.
Then I lost track of the code in the BeOpen move.  (It wasn't very
much.)

I used a completely different approach to parsing: look at the code
from the outside in, e.g. when you see

  def foo(a,b,c):
      print a
      for i in range(b):
          while x:
              print v
      else:
          bah()

you first notice that there's a line starting with a 'def' keyword
followed by some indented stuff; then you notice that the indented
stuff is a line starting with 'print', a line starting with 'for'
followed by more indented stuff, and a line starting with 'else' and
more indented stuff; etc.

This requires tokenization to succeed -- you need to know what are
continuation lines, and what are strings and comments, before you can
parse the rest; but I believe it can be made to succeed even in the
face of quite severe problems.
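
A rough sketch of that outside-in scan in Python (hypothetical code,
not what I had checked in): group source lines by indentation and
record the keyword that opens each block.

```python
# Hypothetical sketch of the "outside in" scan described above: group
# source lines by indentation, recording the keyword that opens each block.
def scan_blocks(lines):
    """Return a list of (keyword, line, children) at the outermost indent."""
    result = []
    i = 0
    while i < len(lines):
        line = lines[i]
        if not line.strip():
            i += 1
            continue
        indent = len(line) - len(line.lstrip())
        keyword = line.strip().split()[0].rstrip(':')
        # every following line that is blank or more deeply indented
        # belongs to this statement's suite
        j = i + 1
        children = []
        while j < len(lines):
            nxt = lines[j]
            if nxt.strip() and len(nxt) - len(nxt.lstrip()) <= indent:
                break
            children.append(nxt)
            j += 1
        result.append((keyword, line.strip(),
                       scan_blocks(children) if children else []))
        i = j
    return result
```

Run on the 'def foo' example above, the outer level is a single 'def'
block whose children are the 'print', 'for' and 'else' lines -- which
is exactly the first observation the checker needs.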

(No time to elaborate. :-( )

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From mal at lemburg.com  Thu Sep 21 12:32:23 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 21 Sep 2000 12:32:23 +0200
Subject: [Python-Dev] Python syntax checker ?
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de>
Message-ID: <39C9E3B7.5F9BFC01@lemburg.com>

Martin von Loewis wrote:
> 
> > Would it be possible to write a Python syntax checker that doesn't
> > stop processing at the first error it finds but instead tries to
> > continue as far as possible (much like make -k) ?
> 
> In "Compilerbau", this is referred to as "Fehlerstabilisierung". I
> suggest having a look at the dragon book (Aho, Sethi, Ullman).
> 
> The common approach is to insert or remove tokens, using some
> heuristics. In YACC, it is possible to add error productions to the
> grammar. Whenever an error occurs, the parser assigns all tokens to
> the "error" non-terminal until it concludes that it can perform a
> reduce action.
> 
> A similar approach might work for the Python Grammar. For each
> production, you'd define a set of stabilization tokens. If these are
> encountered, then the rule would be considered complete. Everything is
> consumed until a stabilization token is found.
> 
> For example, all expressions could be stabilized with a
> keyword. I.e. if you encounter a syntax error inside an expression,
> you ignore all tokens until you see 'print', 'def', 'while', etc.
> 
> In some cases, it may be better to add input rather than removing
> it. For example, if you get an "inconsistent dedent" error, you could
> assume that this really was a consistent dedent, or you could assume
> it was not meant as a dedent at all. Likewise, if you get a
> single-quote start-of-string, with no single-quote until end-of-line,
> you just should assume there was one.
> 
> Adding error productions to ignore input until stabilization may be
> feasible on top of the existing parser. Adding tokens in the right
> place is probably harder - I'd personally go for a pure Python
> solution, that operates on Grammar/Grammar.

I think I'd prefer a Python solution too -- perhaps I could
start out with tokenize.py and muddle along that way. pylint
from Aaron Watters should also provide some inspiration.
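
The stabilization scheme Martin describes might look roughly like this
in Python (a hypothetical toy, not real pylint code): on a syntax
error, discard tokens until a keyword that can start a statement, then
resume parsing.

```python
# Toy sketch of panic-mode recovery: collect all errors instead of
# stopping at the first one.  SYNC_TOKENS are keywords that may start
# a new statement and thus serve as stabilization points.
SYNC_TOKENS = {'print', 'def', 'class', 'if', 'while', 'for', 'return'}

def check_all(tokens, parse_statement):
    """parse_statement(tokens, i) parses one statement starting at i and
    returns the index just past it, or raises SyntaxError."""
    errors = []
    i = 0
    while i < len(tokens):
        try:
            i = parse_statement(tokens, i)
        except SyntaxError as err:
            errors.append((i, str(err)))
            i += 1
            # discard input until a stabilization token is found
            while i < len(tokens) and tokens[i] not in SYNC_TOKENS:
                i += 1
    return errors
```

The interesting design decision is entirely in SYNC_TOKENS: too small
a set and one error swallows the rest of the file, too large and the
parser resynchronizes in the middle of the broken construct.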

Thanks,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From mal at lemburg.com  Thu Sep 21 12:42:46 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 21 Sep 2000 12:42:46 +0200
Subject: [Python-Dev] Python syntax checker ?
References: <39C8C834.5E3B90E7@lemburg.com> <200009210217.VAA06180@cj20424-a.reston1.va.home.com>
Message-ID: <39C9E626.6CF85658@lemburg.com>

Guido van Rossum wrote:
> 
> > Would it be possible to write a Python syntax checker that doesn't
> > stop processing at the first error it finds but instead tries
> > to continue as far as possible (much like make -k) ?
> >
> > If yes, could the existing Python parser/compiler be reused for
> > such a tool ?
> >
> > I was asked to write a tool which checks Python code and returns
> > a list of found errors (syntax error and possibly even some
> > lint warnings) instead of stopping at the first error it finds.
> 
> I had some ideas for this in the context of CP4E, and I even tried to
> implement some, but didn't get far enough to check it in anywhere.
> Then I lost track of the code in the BeOpen move.  (It wasn't very
> much.)
> 
> I used a completely different approach to parsing: look at the code
> from the outside in, e.g. when you see
> 
>   def foo(a,b,c):
>       print a
>       for i in range(b):
>           while x:
>               print v
>       else:
>           bah()
> 
> you first notice that there's a line starting with a 'def' keyword
> followed by some indented stuff; then you notice that the indented
> stuff is a line starting with 'print', a line starting with 'for'
> followed by more indented stuff, and a line starting with 'else' and
> more indented stuff; etc.

This is similar to my initial idea: syntax checking should continue
(or possibly restart) at the next found "block" after an error.

E.g. in Thomas' case:

if 1:

     doodle()

    forever()
    and_ever()
    <tons more code using 4-space indent>

the checker should continue at forever() possibly by restarting
checking at that line.

> This requires tokenization to succeed -- you need to know what are
> continuation lines, and what are strings and comments, before you can
> parse the rest; but I believe it can be made to succeed even in the
> face of quite severe problems.

Looks like this is a highly non-trivial job...

Thanks,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From mal at lemburg.com  Thu Sep 21 12:58:57 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 21 Sep 2000 12:58:57 +0200
Subject: [Python-Dev] Disabling Unicode readbuffer interface
References: <200009201922.VAA01669@loewis.home.cs.tu-berlin.de>
Message-ID: <39C9E9F1.81C50A35@lemburg.com>

"Martin v. Loewis" wrote:
> 
> I just tried to disable the getreadbufferproc on Unicode objects. Most
> of the test suite continues to work.

Martin, haven't you read my last post to Guido ? 

Completely disabling getreadbuf is not a solution worth considering --
it breaks far too much code which the test suite doesn't even test,
e.g. MarkH's win32 stuff produces tons of Unicode objects which
can then get passed to potentially all of the stdlib. The test suite
doesn't check these cases.
 
Here's another possible solution to the problem:

    Special case Unicode in getargs.c's code for "s#" only and leave
    getreadbuf enabled. "s#" could then return the default encoded
    value for the Unicode object while SRE et al. could still use 
    PyObject_AsReadBuffer() to get at the raw data.

> test_unicode fails, which is caused by "s#" not working anymore when
> in readbuffer_encode when testing the unicode_internal encoding. That
> could be fixed (*).

True. It currently relies on the fact that "s#" returns the internal
raw data representation for Unicode.
 
> More concerning, sre fails when matching a unicode string. sre uses
> the getreadbufferproc to get to the internal representation. If it has
> sizeof(Py_UNICODE) times as many bytes as it is long, we got a unicode
> buffer (?!?).
> 
> I'm not sure what the right solution would be in this case: I *think*
> sre should have more specific knowledge of Unicode objects, so it
> should support objects with a buffer interface representing a 1-byte
> character string, or Unicode objects. Actually, is there anything
> wrong with sre operating on string and unicode objects only? It
> requires that the buffer has a single segment, anyway...

Ouch... but then again, it's a (documented ?) feature of re and
sre that they work on getreadbuf compatible objects, e.g.
mmap'ed files, so they'll have to use "s#" for accessing the
data.

Of course, with the above solution, SRE could use the 
PyObject_AsReadBuffer() API to get at the binary data.
 
> Regards,
> Martin
> 
> (*) The 'internal encoding' function should directly get to the
> representation of the unicode object, and readbuffer_encode could
> become Python:
> 
> def readbuffer_encode(o,errors="strict"):
>   b = buffer(o)
>   return str(b),len(b)
> 
> or be removed altogether, as it would (rightfully) stop working on
> unicode objects.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From jeremy at beopen.com  Thu Sep 21 16:58:54 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 21 Sep 2000 10:58:54 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib/xml/sax __init__.py,1.6,1.7
In-Reply-To: <200009211447.HAA02917@slayer.i.sourceforge.net>
References: <200009211447.HAA02917@slayer.i.sourceforge.net>
Message-ID: <14794.8750.83880.932497@bitdiddle.concentric.net>

Lars,

I just fixed the last set of checkins you made to the xml package.
You left the system in a state where test_minidom failed.  When part
of the regression test fails, it causes severe problems for all other
developers.  They have no way to know if the change they've just made
to the tuple object (for example) causes the failure or not.  Thus, it
is essential that the CVS repository never be in a state where the
regression tests fail.

You're kind of new around here, so I'll let you off with a warning
<wink>.

Jeremy


From martin at loewis.home.cs.tu-berlin.de  Thu Sep 21 18:19:53 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Thu, 21 Sep 2000 18:19:53 +0200
Subject: [Python-Dev] Disabling Unicode readbuffer interface
In-Reply-To: <39C9E9F1.81C50A35@lemburg.com> (mal@lemburg.com)
References: <200009201922.VAA01669@loewis.home.cs.tu-berlin.de> <39C9E9F1.81C50A35@lemburg.com>
Message-ID: <200009211619.SAA00737@loewis.home.cs.tu-berlin.de>

> Martin, haven't you read my last post to Guido ? 

I've read

http://www.python.org/pipermail/python-dev/2000-September/016162.html

where you express a preference for disabling the getreadbuf slot, in
addition to special-casing Unicode objects in s#. I've just tested the
effects of your solution 1 on the test suite. Or are you referring to
a different message?

> Completely disabling getreadbuf is not a solution worth considering --
> it breaks far too much code which the test suite doesn't even test,
> e.g. MarkH's win32 stuff produces tons of Unicode objects which
> can then get passed to potentially all of the stdlib. The test suite
> doesn't check these cases.

Do you have any specific examples of what else would break? Looking at
all occurrences of 's#' in the standard library, I can't find a single
case where the current behaviour would be right - in all cases raising
an exception would be better. Again, any counter-examples?

>     Special case Unicode in getargs.c's code for "s#" only and leave
>     getreadbuf enabled. "s#" could then return the default encoded
>     value for the Unicode object while SRE et al. could still use 
>     PyObject_AsReadBuffer() to get at the raw data.

I think your option 2 is acceptable, although I feel that option 1
would expose more potential problems. What if an application
unknowingly passes a unicode object to md5.update? In testing, it may
always succeed as ASCII-only data is used, and it will suddenly start
breaking when non-ASCII strings are entered by some user. 

Using the internal rep would also be wrong in this case - the md5 hash
would depend on the byte order, which is probably not desired (*).

In any case, your option 2 would be a big improvement over the current
state, so I'll just shut up.

Regards,
Martin

(*) BTW, is there a meaningful way to define md5 for a Unicode string?
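
(One byte-order-independent answer, sketched here with Python's
hashlib module purely for illustration: hash a fixed encoding of the
text, e.g. UTF-8, rather than the in-memory representation.)

```python
import hashlib

def md5_text(s):
    # Hash a fixed encoding (UTF-8) so the digest does not depend on
    # the platform's internal byte order for Py_UNICODE.
    return hashlib.md5(s.encode('utf-8')).hexdigest()
```

With this definition the same text gives the same digest on big- and
little-endian machines, which the raw-buffer approach cannot guarantee.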


From DavidA at ActiveState.com  Thu Sep 21 18:32:30 2000
From: DavidA at ActiveState.com (David Ascher)
Date: Thu, 21 Sep 2000 09:32:30 -0700 (Pacific Daylight Time)
Subject: [Python-Dev] Unobfuscated Perl Code Contest
Message-ID: <Pine.WNT.4.21.0009210931540.1868-100000@loom>

ObPython at the end...

---da

Unobfuscated Perl Code Contest
September 16, 19100

The Perl Gazette has announced the winners in the First
Annual _Un_obfuscated Perl Code Contest.  First place went
to Edwin Fuller, who submitted this unobfuscated program:

#!/usr/bin/perl
print "Hello world!\n";

"This was definitely a challenging contest," said an
ecstatic Edwin Fuller. "I've never written a Perl program
before that didn't have hundreds of qw( $ @ % & * | ? / \ !
# ~ ) symbols.  I really had to summon all of my
programming skills to produce an unobfuscated program."

The judges in the contest learned that many programmers
don't understand the meaning of 'unobfuscated perl'.  For
instance, one participant sent in this 'Hello world!'
program:

#!/usr/bin/perl
$x='unob';
open OUT, ">$x.c";
print OUT <<HERE_DOC;
#include <stdio.h>
int main(void) { 
 FILE *f=fopen("$x.sh", "w");
 fprintf(f,"echo Hello world!\\n");
 fclose(f);
 system("chmod +x $x.sh");
 system("./$x.sh"); return 0; 
}
HERE_DOC
close OUT;
system("gcc $x.c -o $x && ./$x");

"As an experienced Perl monger," said one of the judges, "I
can instantly tell that this program spits out C source
code that spits out a shell script to print 'Hello
world!'.  But this code certainly does not qualify as
unobfuscated Perl -- I mean, most of it isn't even written
in Perl!"

He added, "Out of all of the entries, only two were
actually unobfuscated perl.  Everything else looked like
line noise -- or worse."

The second place winner, Mrs. Sea Pearl, submitted the
following code:

#!/usr/bin/perl
use strict;
# Do nothing, successfully
exit(0);

"I think everybody missed the entire point of this
contest," ranted one judge.  "Participants were supposed to
produce code that could actually be understood by somebody
other than a ten-year Perl veteran.  Instead, we get an
implementation of a Java Virtual Machine.  And a version of
the Linux kernel ported to Win32 Perl.  Sheesh!"

In response to the news, a rogue group of Perl hackers has
presented a plan to add a "use really_goddamn_strict"
pragma to the language that would enforce readability and
unobfuscation.  With this pragma in force, the Perl
compiler might say:

 Warning: Program contains zero comments.  You've probably
 never seen or used one before; they begin with a #
 symbol.  Please start using them or else a representative
 from the nearest Perl Mongers group will come to your
 house and beat you over the head with a cluestick.

 Warning: Program uses a cute trick at line 125 that might
 make sense in C.  But this isn't C!

 Warning: Code at line 412 indicates that programmer is an
 idiot. Please correct error between chair and monitor.

 Warning: While There's More Than One Way To Do It, your
 method at line 523 is particularly stupid.  Please try
 again.

 Warning: Write-only code detected between lines 612 and
 734. While this code is perfectly legal, you won't have
 any clue what it does in two weeks.  I recommend you start
 over.

 Warning: Code at line 1,024 is indistinguishable from line
 noise or the output of /dev/random

 Warning: Have you ever properly indented a piece of code
 in your entire life?  Evidently not.

 Warning: I think you can come up with a more descriptive
 variable name than "foo" at line 1,523.

 Warning: Programmer attempting to re-invent the wheel at
 line 2,231. There's a function that does the exact same
 thing on CPAN -- and it actually works.

 Warning: Perl tries to make the easy jobs easy without
 making the hard jobs impossible -- but your code at line
 5,123 is trying to make an easy job impossible.  

 Error: Programmer failed to include required string "All
 hail Larry Wall" within program.  Execution aborted due to
 compilation errors.

Of course, convincing programmers to actually use that
pragma is another matter.  "If somebody actually wanted to
write readable code, why would they use Perl?  Let 'em use
Python!" exclaimed one Usenet regular.  "So this pragma is
a waste of electrons, just like use strict and the -w
command line parameter."




From guido at beopen.com  Thu Sep 21 19:44:25 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 21 Sep 2000 12:44:25 -0500
Subject: [Python-Dev] Disabling Unicode readbuffer interface
In-Reply-To: Your message of "Thu, 21 Sep 2000 18:19:53 +0200."
             <200009211619.SAA00737@loewis.home.cs.tu-berlin.de> 
References: <200009201922.VAA01669@loewis.home.cs.tu-berlin.de> <39C9E9F1.81C50A35@lemburg.com>  
            <200009211619.SAA00737@loewis.home.cs.tu-berlin.de> 
Message-ID: <200009211744.MAA17168@cj20424-a.reston1.va.home.com>

I haven't researched this to the bottom, but based on the email
exchange, it seems that keeping getreadbuf and special-casing s# for
Unicode objects makes the most sense.  That makes the 's' and 's#'
more similar.  Note that 'z#' should also be fixed.

I believe that SRE uses PyObject_AsReadBuffer() so that it can work
with arrays of shorts as well (when shorts are two chars).  Kind of
cute.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From mal at lemburg.com  Thu Sep 21 19:16:17 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 21 Sep 2000 19:16:17 +0200
Subject: [Python-Dev] Disabling Unicode readbuffer interface
References: <200009201922.VAA01669@loewis.home.cs.tu-berlin.de> <39C9E9F1.81C50A35@lemburg.com>  
	            <200009211619.SAA00737@loewis.home.cs.tu-berlin.de> <200009211744.MAA17168@cj20424-a.reston1.va.home.com>
Message-ID: <39CA4261.2B586B3F@lemburg.com>

Guido van Rossum wrote:
> 
> I haven't researched this to the bottom, but based on the email
> exchange, it seems that keeping getreadbuf and special-casing s# for
> Unicode objects makes the most sense.  That makes the 's' and 's#'
> more similar.  Note that 'z#' should also be fixed.
> 
> I believe that SRE uses PyObject_AsReadBuffer() so that it can work
> with arrays of shorts as well (when shorts are two chars).  Kind of
> cute.

Ok, I'll check in a patch special-casing Unicode objects
in getargs.c's "s#" later today.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From mal at lemburg.com  Thu Sep 21 23:28:47 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 21 Sep 2000 23:28:47 +0200
Subject: [Python-Dev] Versioning for Python packages
References: <200009192300.RAA01451@localhost.localdomain> <39C87B69.DD0D2DC9@lemburg.com> <200009201507.KAA04851@cj20424-a.reston1.va.home.com>  
	            <39C8CEB5.65A70BBE@lemburg.com> <200009211538.KAA08180@cj20424-a.reston1.va.home.com>
Message-ID: <39CA7D8F.633E74D6@lemburg.com>

[Moved to python-dev from xml-sig]

Guido van Rossum wrote:
> 
> > Perhaps a good start would be using lib/python-2.0.0 as installation
> > target rather than just lib/python2. I'm sure this was discussed
> > before, but given the problems we had with this during the 1.5
> > cycle (with 1.5.2 providing not only patches, but also new
> > features), I think a more fine-grained approach should be
> > considered for future versions.
> 
> We're using lib/python2.0, and we plan not to make major releases with
a 3rd-level version number increment!  So I think that's not necessary.

Ah, that's good news :-)
 
> > About package versioning: how would the version be specified
> > in imports ?
> >
> > from mx.DateTime(1.4.0) import now
> > from mx(1.0.0).DateTime import now
> > from mx(1.0.0).DateTime(1.4.0) import now
> >
> > The directory layout would then look something like this:
> >
> > mx/
> >       1.0.0/
> >               DateTime/
> >                       1.4.0/
> >
> > Package __path__ hooks could be used to implement the
> > lookup... or of course some new importer.
> >
> > But what happens if there is no (old) version mx-1.0.0 installed ?
> > Should Python then default to mx-1.3.0 which is installed or
> > raise an ImportError ?
> >
> > This sounds like trouble... ;-)
> 
> You've got it.  Please move this to python-dev.  It's good PEP
> material!

Done.
 
> > > > We will have a similar problem with Unicode and the stdlib
> > > > during the Python 2.0 cycle: people will want to use Unicode
> > > > together with the stdlib, yet many modules in the stdlib
> > > > don't support Unicode. To remedy this, users will have to
> > > > patch the stdlib modules and put them somewhere so that they
> > > > can override the original 2.0 ones.
> > >
> > > They can use $PYTHONPATH.
> >
> > True, but why not help them a little by letting site
> > installations override the stdlib ? After all, distutils
> > standard target is site-packages.
> 
> Overrides of the stdlib are dangerous in general and should not be
> encouraged.
> 
> > > > BTW, with distutils coming on strong I don't really see a
> > > > need for any hacks: instead distutils should be given some
> > > > smart logic to do the right thing, ie. it should support
> > > > installing subpackages of a package. If that's not desired,
> > > > then I'd opt for overriding the whole package (without any
> > > > hacks to import the overridden one).
> > >
> > > That's another possibility.  But then distutils will have to become
> > > aware of package versions again.
> >
> > This shouldn't be hard to add to the distutils processing:
> > before starting an installation of a package, the package
> > pre-install hook could check which versions are installed
> > and then decide whether to raise an exception or continue.
> 
> Here's another half-baked idea about versions: perhaps packages could
> have a __version__.py file?

Hmm, I usually put a __version__ attribute right into the
__init__.py file of the package -- why another file ?

I think we should come up with a convention on these
meta-attributes. They are useful for normal modules
as well, e.g. __version__, __copyright__, __author__, etc.

Looks like it's PEP-time again ;-)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From jeremy at beopen.com  Fri Sep 22 22:29:18 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Fri, 22 Sep 2000 16:29:18 -0400 (EDT)
Subject: [Python-Dev] Sunday code freeze
Message-ID: <14795.49438.749774.32159@bitdiddle.concentric.net>

We will need about a day to prepare the 2.0b2 release.  Thus, all
changes need to be committed by the end of the day on Sunday.  A code
freeze will be in effect starting then.

Please try to resolve any patches or bugs assigned to you before the
code freeze.

Jeremy


From thomas at xs4all.net  Sat Sep 23 14:26:51 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Sat, 23 Sep 2000 14:26:51 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0042.txt,1.19,1.20
In-Reply-To: <200009230440.VAA11540@slayer.i.sourceforge.net>; from fdrake@users.sourceforge.net on Fri, Sep 22, 2000 at 09:40:47PM -0700
References: <200009230440.VAA11540@slayer.i.sourceforge.net>
Message-ID: <20000923142651.A20757@xs4all.nl>

On Fri, Sep 22, 2000 at 09:40:47PM -0700, Fred L. Drake wrote:

> Modified Files:
> 	pep-0042.txt 
> Log Message:
> 
> Added request for a portable time.strptime() implementation.

As Tim noted, there already was a request for a separate implementation of
strptime(), though slightly differently worded. I've merged them.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From tim_one at email.msn.com  Sat Sep 23 22:44:27 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 23 Sep 2000 16:44:27 -0400
Subject: [Python-Dev] FW: Compiling Python 1.6 under MacOS X ...
Message-ID: <LNBBLJKPBEHFEDALKOLCIEJLHHAA.tim_one@email.msn.com>

FYI.

-----Original Message-----
From: python-list-admin at python.org
[mailto:python-list-admin at python.org]On Behalf Of Thelonious Georgia
Sent: Saturday, September 23, 2000 4:05 PM
To: python-list at python.org
Subject: Compiling Python 1.6 under MacOS X ...


Hey all-

I'm trying to get the 1.6 sources to compile under the public beta of MacOS
X. I ran ./configure, then make, and it does a pretty noble job of
compiling, up until I get:

cc -g -O2 -I./../Include -I.. -DHAVE_CONFIG_H -c -o unicodectype.o unicodectype.c
cc: Internal compiler error: program cpp-precomp got fatal signal 11
make[1]: *** [unicodectype.o] Error 1
make: *** [Objects] Error 2
[dhcppc4:~/Python-1.6] root#

cc -v returns:
Reading specs from /usr/libexec/ppc/2.95.2/specs
Apple Computer, Inc. version cc-796.3, based on gcc driver version 2.7.2.1
exec2

I have searched high and low, but can find no mention of this particular
error (which makes sense, sure, because of how long the beta has been out),
but any help in getting around this particular error would be appreciated.

Theo


--
http://www.python.org/mailman/listinfo/python-list




From tim_one at email.msn.com  Sun Sep 24 01:31:41 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 23 Sep 2000 19:31:41 -0400
Subject: [Python-Dev] FW: regarding the Python Developer posting...
Message-ID: <LNBBLJKPBEHFEDALKOLCMEKAHHAA.tim_one@email.msn.com>

Dan, anyone can mail to python-dev at python.org.

Everyone else, this appears to be a followup on the Mac OSX compiler error.

Dan, I replied to that on comp.lang.python; if you have bugs to report
(platform-specific or otherwise) against the current CVS tree, SourceForge
is the best place to do it.  Since the 1.6 release is history, it's too late
to change anything there.

-----Original Message-----
From: Dan Wolfe [mailto:dkwolfe at pacbell.net]
Sent: Saturday, September 23, 2000 5:35 PM
To: tim_one at email.msn.com
Subject: regarding the Python Developer posting...


Howdy Tim,

I can't send to the development list so you're gonna have to suffer... ;-)

With regards to:

<http://www.python.org/pipermail/python-dev/2000-September/016188.html>

>cc -g -O2 -I./../Include -I.. -DHAVE_CONFIG_H -c -o unicodectype.o unicodectype.c
>cc: Internal compiler error: program cpp-precomp got fatal signal 11
>make[1]: *** [unicodectype.o] Error 1
>make: *** [Objects] Error 2
>[dhcppc4:~/Python-1.6] root#

I believe it's a bug in the cpp pre-comp as it also appears under 2.0.
I've been able to work around it by passing -traditional-cpp to the
compiler and it doesn't complain... ;-)  I'll take it up with Stan Steb
(the compiler guy) when I go into work on Monday.

Now if I can just figure out the test_sre.py, I'll be happy. (eg it
compiles and runs but is still not passing all the regression tests).

- Dan




From gvwilson at nevex.com  Sun Sep 24 16:26:37 2000
From: gvwilson at nevex.com (Greg Wilson)
Date: Sun, 24 Sep 2000 10:26:37 -0400 (EDT)
Subject: [Python-Dev] serializing Python as XML
Message-ID: <Pine.LNX.4.10.10009241022590.14730-100000@akbar.nevex.com>

Hi, everyone.  One of the Software Carpentry designers has asked whether a
package exists to serialize Python data structures as XML, so that lists
of dictionaries of tuples of etc. can be exchanged with other XML-aware
tools.  Does this exist, even in pre-release form?  If not, I'd like to
hear from anyone who's already done any thinking in this direction.

Thanks,
Greg

p.s. has there ever been discussion about adding an '__xml__' method to
Python to augment the '__repr__' and '__str__' methods?





From fdrake at beopen.com  Sun Sep 24 16:27:55 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Sun, 24 Sep 2000 10:27:55 -0400 (EDT)
Subject: [Python-Dev] serializing Python as XML
In-Reply-To: <Pine.LNX.4.10.10009241022590.14730-100000@akbar.nevex.com>
References: <Pine.LNX.4.10.10009241022590.14730-100000@akbar.nevex.com>
Message-ID: <14798.3947.965595.628569@cj42289-a.reston1.va.home.com>

Greg Wilson writes:
 > Hi, everyone.  One of the Software Carpentry designers has asked whether a
 > package exists to serialize Python data structures as XML, so that lists
 > of dictionaries of tuples of etc. can be exchanged with other XML-aware
 > tools.  Does this exist, even in pre-release form?  If not, I'd like to
 > hear from anyone who's already done any thinking in this direction.

  There are at least two implementations; I'm not sure of their exact
status.
  The PyXML package contains something called xml.marshal, written by Andrew
Kuchling.  I've also seen something called Python xml_objectify (I
think) announced on Freshmeat.net.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From gvwilson at nevex.com  Sun Sep 24 17:00:03 2000
From: gvwilson at nevex.com (Greg Wilson)
Date: Sun, 24 Sep 2000 11:00:03 -0400 (EDT)
Subject: [Python-Dev] installer difficulties
Message-ID: <Pine.LNX.4.10.10009241056300.14730-100000@akbar.nevex.com>

I just ran the "uninstall" that comes with BeOpen-Python-2.0b1.exe (the
September 8 version), then re-ran the installer.  A little dialog came up
saying "Corrupt installation detected", and the installer exited.  I deleted
all of my g:\python2.0 files, all the registry entries, etc. --- same
behavior.

1. What is it looking at to determine whether the installation is corrupt?
   The installer itself, or my hard drive?  (If the former, my copy of the
   downloaded installer is 5,970,597 bytes long.)

2. What's the fix?

Thanks,
Greg





From skip at mojam.com  Sun Sep 24 17:19:10 2000
From: skip at mojam.com (Skip Montanaro)
Date: Sun, 24 Sep 2000 10:19:10 -0500 (CDT)
Subject: [Python-Dev] serializing Python as XML
In-Reply-To: <14798.3947.965595.628569@cj42289-a.reston1.va.home.com>
References: <Pine.LNX.4.10.10009241022590.14730-100000@akbar.nevex.com>
	<14798.3947.965595.628569@cj42289-a.reston1.va.home.com>
Message-ID: <14798.7022.727038.770709@beluga.mojam.com>

    >> Hi, everyone.  One of the Software Carpentry designers has asked
    >> whether a package exists to serialize Python data structures as XML,
    >> so that lists of dictionaries of tuples of etc. can be exchanged with
    >> other XML-aware tools.

    Fred> There are at least two implementations ... PyXML & xml_objectify 

You can also use XML-RPC (http://www.xmlrpc.com/) or SOAP
(http://www.develop.com/SOAP/).  In Fredrik Lundh's xmlrpclib library
(http://www.pythonware.com/products/xmlrpc/) you can access the dump and
load functions without actually using the rest of the protocol if you like.
I suspect there are similar hooks in soaplib
(http://www.pythonware.com/products/soap/).
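
The dump/load hooks Skip mentions can be sketched via the modern
standard-library descendant of Fredrik Lundh's xmlrpclib (xmlrpc.client);
this touches only the serialization layer, no transport or method calls:

```python
# Round-trip a Python data structure through XML-RPC serialization alone.
from xmlrpc.client import dumps, loads

data = {"name": "Software Carpentry", "scores": [1, 2, 3]}
xml = dumps((data,))          # dumps takes a tuple of parameters
restored, methodname = loads(xml)
assert restored[0] == data    # lists of dicts of ... survive the round trip
assert methodname is None     # no method name: this was bare data
```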

-- 
Skip Montanaro (skip at mojam.com)
http://www.mojam.com/
http://www.musi-cal.com/



From tim_one at email.msn.com  Sun Sep 24 19:55:15 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 24 Sep 2000 13:55:15 -0400
Subject: [Python-Dev] installer difficulties
In-Reply-To: <Pine.LNX.4.10.10009241056300.14730-100000@akbar.nevex.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCOELDHHAA.tim_one@email.msn.com>

[posted & mailed]

[Greg Wilson]
> I just ran the "uninstall" that comes with BeOpen-Python-2.0b1.exe (the
> September 8 version), then re-ran the installer.  A little dialog came up
> saying "Corrupt installation detected", and the installer exits. Deleted
> all of my g:\python2.0 files, all the registry entries, etc. --- same
> behavior.
>
> 1. What is it looking at to determine whether the installation is
>    corrupt?

While I built the installer, I have no idea!  It's an internal function of
the Wise software, and-- you guessed it <wink> --that's closed-source.  I
*believe* it's failing an internal consistency check, and that's all.

>    The installer itself, or my hard drive?  (If the former, my copy
>    of the downloaded installer is 5,970,597 bytes long.)

That is the correct size.

> 2. What's the fix?

Dunno.  It's a new one on me, and I uninstall and reinstall many times each
week.  Related things occasionally pop up on Python-Help, and are usually
fixed there by asking the victim to try downloading again with some other
program (Netscape instead of IE, or vice versa, or FTP, or GetRight, ...).

Here's a better check, provided you have *some* version of Python sitting
around:

>>> path = "/updates/BeOpen-Python-2.0b1.exe" # change accordingly
>>> import os
>>> os.path.getsize(path)
5970597
>>> guts = open(path, "rb").read()
>>> len(guts)
5970597
>>> import sha
>>> print sha.new(guts).hexdigest()
ef495d351a93d887f5df6b399747d4e96388b0d5
>>>

If you don't get the same SHA digest, it is indeed corrupt despite having
the correct size.  Let us know!
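
For reference, the same check in present-day Python: the sha module's job
is now done by hashlib.  The assertion below uses the standard SHA-1
test vector for b"abc", not the installer's digest.

```python
# Hash a downloaded file the way Tim does above, but with hashlib.
import hashlib

def file_sha1(path):
    # Read in binary mode, exactly as in the original session.
    with open(path, "rb") as f:
        return hashlib.sha1(f.read()).hexdigest()

# Known-answer check that the digest function itself is sound:
assert hashlib.sha1(b"abc").hexdigest() == \
    "a9993e364706816aba3e25717850c26c9cd0d89d"
```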





From martin at loewis.home.cs.tu-berlin.de  Sun Sep 24 19:56:04 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Sun, 24 Sep 2000 19:56:04 +0200
Subject: [Python-Dev] serializing Python as XML
Message-ID: <200009241756.TAA00735@loewis.home.cs.tu-berlin.de>

> whether a package exists to serialize Python data structures as XML,

Zope has a variant of pickle where pickles follow an XML DTD (i.e. it
pickles into XML). I believe the current implementation first pickles
into an ASCII pickle and reformats that as XML afterwards, but that is
an implementation issue.

> so that lists of dictionaries of tuples of etc. can be exchanged
> with other XML-aware tools.

See, this is one of the common XML pitfalls. Even though the output of
that is well-formed XML, and even though there is an imaginary DTD (*)
which this XML could be validated against: it is still unlikely that
other XML-aware tools could make much use of the format, at least if
the original Python contained some "interesting" objects
(e.g. instance objects). Even with only dictionaries of tuples: The
Zope DTD supports cyclic structures; it would not be straightforward
to support that back-referencing structure in some other tool
(although certainly possible).

XML alone does not give interoperability. You need some agreed-upon
DTD for that. If that other XML-aware tool is willing to adopt a
Python-provided DTD - why couldn't it read Python pickles in the first
place?

Regards,
Martin

(*) There have been repeated promises of actually writing down the DTD
some day.



From tim_one at email.msn.com  Sun Sep 24 20:47:11 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 24 Sep 2000 14:47:11 -0400
Subject: [Python-Dev] How about braindead Unicode "compression"?
Message-ID: <LNBBLJKPBEHFEDALKOLCCELFHHAA.tim_one@email.msn.com>

unicodedatabase.c has 64K lines of the form:

/* U+009a */ { 13, 0, 15, 0, 0 },

Each struct getting initialized there takes 8 bytes on most machines (4
unsigned chars + a char*).

However, there are only 3,567 unique structs (54,919 of them are all 0's!).
So a braindead-easy mechanical "compression" scheme would simply be to
create one vector with the 3,567 unique structs, and replace the 64K record
constructors with 2-byte indices into that vector.  Data size goes down from

    64K * 8b = 512Kb

to

    3567 * 8b + 64K * 2b ~= 156Kb

at once; the source-code transformation is easy to do via a Python program;
the compiler warnings on my platform (due to unicodedatabase.c's sheer size)
can go away; and one indirection is added to access (which remains utterly
uniform).

Previous objections to compression were, as far as I could tell, based on
fear of elaborate schemes that rendered the code unreadable and the access
code excruciating.  But if we can get more than a factor of 3 with little
work and one new uniform indirection, do people still object?

If nobody objects by the end of today, I intend to do it.
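
The scheme above can be sketched in a few lines over toy data (the record
values here are made up; the real table is generated from the unicode.org
data files, and each record stands in for the 8-byte C struct):

```python
# Collapse 64K per-character records into a unique-record vector
# plus a table of 2-byte indices, as described above.
NUM_CODEPOINTS = 65536
records = [(13, 0, 15, 0, 0) if cp % 7 == 0 else (0, 0, 0, 0, 0)
           for cp in range(NUM_CODEPOINTS)]   # toy database

unique = sorted(set(records))                 # ~3,567 entries in reality
index_of = {rec: i for i, rec in enumerate(unique)}
indices = [index_of[rec] for rec in records]  # one 2-byte index per char

def lookup(code_point):
    # Access keeps its uniform shape, at the cost of one indirection.
    return unique[indices[code_point]]

old_size = NUM_CODEPOINTS * 8                 # 64K structs * 8 bytes
new_size = len(unique) * 8 + NUM_CODEPOINTS * 2
assert lookup(0) == (13, 0, 15, 0, 0) and lookup(1) == (0, 0, 0, 0, 0)
assert new_size < old_size // 3               # better than a 3x saving here
```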





From tim_one at email.msn.com  Sun Sep 24 22:26:40 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 24 Sep 2000 16:26:40 -0400
Subject: [Python-Dev] installer difficulties
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOELDHHAA.tim_one@email.msn.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCEELKHHAA.tim_one@email.msn.com>

[Tim]
> ...
> Here's a better check, provided you have *some* version of Python sitting
> around:
>
> >>> path = "/updates/BeOpen-Python-2.0b1.exe" # change accordingly
> >>> import os
> >>> os.path.getsize(path)
> 5970597
> >>> guts = open(path, "rb").read()
> >>> len(guts)
> 5970597
> >>> import sha
> >>> print sha.new(guts).hexdigest()
> ef495d351a93d887f5df6b399747d4e96388b0d5
> >>>
>
> If you don't get the same SHA digest, it is indeed corrupt despite having
> the correct size.  Let us know!

Greg reports getting

  e65aac55368b823e1c0bc30c0a5bc4dd2da2adb4

Someone else care to try this?  I tried it both on the original installer I
uploaded to BeOpen, and on the copy I downloaded back from the pythonlabs
download page right after Fred updated it.  At this point I don't know
whether BeOpen's disk is corrupted, or Greg's, or sha has a bug, or ...





From guido at beopen.com  Sun Sep 24 23:47:52 2000
From: guido at beopen.com (Guido van Rossum)
Date: Sun, 24 Sep 2000 16:47:52 -0500
Subject: [Python-Dev] How about braindead Unicode "compression"?
In-Reply-To: Your message of "Sun, 24 Sep 2000 14:47:11 -0400."
             <LNBBLJKPBEHFEDALKOLCCELFHHAA.tim_one@email.msn.com> 
References: <LNBBLJKPBEHFEDALKOLCCELFHHAA.tim_one@email.msn.com> 
Message-ID: <200009242147.QAA06557@cj20424-a.reston1.va.home.com>

> unicodedatabase.c has 64K lines of the form:
> 
> /* U+009a */ { 13, 0, 15, 0, 0 },
> 
> Each struct getting initialized there takes 8 bytes on most machines (4
> unsigned chars + a char*).
> 
> However, there are only 3,567 unique structs (54,919 of them are all 0's!).
> So a braindead-easy mechanical "compression" scheme would simply be to
> create one vector with the 3,567 unique structs, and replace the 64K record
> constructors with 2-byte indices into that vector.  Data size goes down from
> 
>     64K * 8b = 512Kb
> 
> to
> 
>     3567 * 8b + 64K * 2b ~= 156Kb
> 
> at once; the source-code transformation is easy to do via a Python program;
> the compiler warnings on my platform (due to unicodedatabase.c's sheer size)
> can go away; and one indirection is added to access (which remains utterly
> uniform).
> 
> Previous objections to compression were, as far as I could tell, based on
> fear of elaborate schemes that rendered the code unreadable and the access
> code excruciating.  But if we can get more than a factor of 3 with little
> work and one new uniform indirection, do people still object?
> 
> If nobody objects by the end of today, I intend to do it.

Go for it!  I recall seeing that file and thinking the same thing.

(Isn't the VC++ compiler warning about line numbers > 64K?  Then you'd
have to put two pointers on one line to make it go away, regardless of
the size of the generated object code.)

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Sun Sep 24 23:58:53 2000
From: guido at beopen.com (Guido van Rossum)
Date: Sun, 24 Sep 2000 16:58:53 -0500
Subject: [Python-Dev] installer difficulties
In-Reply-To: Your message of "Sun, 24 Sep 2000 16:26:40 -0400."
             <LNBBLJKPBEHFEDALKOLCEELKHHAA.tim_one@email.msn.com> 
References: <LNBBLJKPBEHFEDALKOLCEELKHHAA.tim_one@email.msn.com> 
Message-ID: <200009242158.QAA06679@cj20424-a.reston1.va.home.com>

>   e65aac55368b823e1c0bc30c0a5bc4dd2da2adb4
> 
> Someone else care to try this?  I tried it both on the original installer I
> uploaded to BeOpen, and on the copy I downloaded back from the pythonlabs
> download page right after Fred updated it.  At this point I don't know
> whether BeOpen's disk is corrupted, or Greg's, or sha has a bug, or ...

I just downloaded it again and tried your code, and got the same value
as Greg!  I also get Greg's error on Windows with the newly downloaded
version.

Conclusion: the new Zope-ified site layout has a corrupt file.

I'll try to get in touch with the BeOpen web developers right away!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From mal at lemburg.com  Sun Sep 24 23:20:06 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Sun, 24 Sep 2000 23:20:06 +0200
Subject: [Python-Dev] How about braindead Unicode "compression"?
References: <LNBBLJKPBEHFEDALKOLCCELFHHAA.tim_one@email.msn.com>
Message-ID: <39CE7006.D60A603D@lemburg.com>

Tim Peters wrote:
> 
> unicodedatabase.c has 64K lines of the form:
> 
> /* U+009a */ { 13, 0, 15, 0, 0 },
> 
> Each struct getting initialized there takes 8 bytes on most machines (4
> unsigned chars + a char*).
> 
> However, there are only 3,567 unique structs (54,919 of them are all 0's!).

That's because there are only around 11k definitions in the
Unicode database -- most of the rest is divided into private,
user defined and surrogate high/low byte reserved ranges.

> So a braindead-easy mechanical "compression" scheme would simply be to
> create one vector with the 3,567 unique structs, and replace the 64K record
> constructors with 2-byte indices into that vector.  Data size goes down from
> 
>     64K * 8b = 512Kb
> 
> to
> 
>     3567 * 8b + 64K * 2b ~= 156Kb
> 
> at once; the source-code transformation is easy to do via a Python program;
> the compiler warnings on my platform (due to unicodedatabase.c's sheer size)
> can go away; and one indirection is added to access (which remains utterly
> uniform).
> 
> Previous objections to compression were, as far as I could tell, based on
> fear of elaborate schemes that rendered the code unreadable and the access
> code excruciating.  But if we can get more than a factor of 3 with little
> work and one new uniform indirection, do people still object?

Oh, there was no fear about making the code unreadable...
Christian and Fredrik were both working on smart schemes.
My only objection about these was missing documentation
and generation tools -- vast tables of completely random
looking byte data are unreadable ;-)
 
> If nobody objects by the end of today, I intend to do it.

+1 from here.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From tim_one at email.msn.com  Sun Sep 24 23:25:34 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 24 Sep 2000 17:25:34 -0400
Subject: [Python-Dev] installer difficulties
In-Reply-To: <200009242158.QAA06679@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGELOHHAA.tim_one@email.msn.com>

[Guido]
> I just downloaded it again and tried your code, and got the same value
> as Greg!  I also get Greg's error on Windows with the newly downloaded
> version.
>
> Conclusion: the new Zope-ified site layout has a corrupt file.
>
> I'll try to get in touch with the BeOpen web developers right away!

Thanks!  In the meantime, I pointed Greg to anonymous FTP at
python.beopen.com, in directory /pub/tmp/.  That's where I originally
uploaded the installer, and I doubt our webmasters have had a chance to
corrupt it yet <0.9 wink>.





From mal at lemburg.com  Sun Sep 24 23:28:29 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Sun, 24 Sep 2000 23:28:29 +0200
Subject: [Python-Dev] FW: regarding the Python Developer posting...
References: <LNBBLJKPBEHFEDALKOLCMEKAHHAA.tim_one@email.msn.com>
Message-ID: <39CE71FD.8858B71D@lemburg.com>

Tim Peters wrote:
> 
> Dan, anyone can mail to python-dev at python.org.
> 
> Everyone else, this appears to be a followup on the Mac OSX compiler error.
> 
> Dan, I replied to that on comp.lang.python; if you have bugs to report
> (platform-specific or otherwise) against the current CVS tree, SourceForge
> is the best place to do it.  Since the 1.6 release is history, it's too late
> to change anything there.
> 
> -----Original Message-----
> From: Dan Wolfe [mailto:dkwolfe at pacbell.net]
> Sent: Saturday, September 23, 2000 5:35 PM
> To: tim_one at email.msn.com
> Subject: regarding the Python Developer posting...
> 
> Howdy Tim,
> 
> I can't send to the development list so you're gonna have to suffer... ;-)
> 
> With regards to:
> 
> <http://www.python.org/pipermail/python-dev/2000-September/016188.html>
> 
> >cc -g -O2 -I./../Include -I.. -DHAVE_CONFIG_H   -c -o unicodectype.o
> >unicodectyc
> >cc: Internal compiler error: program cpp-precomp got fatal signal
> 11make[1]:
> >*** [unicodectype.o] Error 1
> >make: *** [Objects] Error 2
> >dhcppc4:~/Python-1.6] root#
> 
> I believe it's a bug in the cpp pre-comp as it also appears under 2.0.
> I've been able to work around it by passing -traditional-cpp to the
> compiler and it doesn't complain... ;-)  I'll take it up with Stan Steb
> (the compiler guy) when I go into work on Monday.

You could try to enable the macro at the top of unicodectype.c:
 
#if defined(macintosh) || defined(MS_WIN64)
/*XXX This was required to avoid a compiler error for an early Win64
 * cross-compiler that was used for the port to Win64. When the platform is
 * released the MS_WIN64 inclusion here should no longer be necessary.
 */
/* This probably needs to be defined for some other compilers too. It breaks the
** 5000-label switch statement up into switches with around 1000 cases each.
*/
#define BREAK_SWITCH_UP return 1; } switch (ch) {
#else
#define BREAK_SWITCH_UP /* nothing */
#endif

If it does compile with the work-around enabled, please
give us a set of defines which identify the compiler and
platform so we can enable it per default for your setup.

> Now if I can just figure out the test_sre.py, I'll be happy. (eg it
> compiles and runs but is still not passing all the regression tests).

Thanks,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From guido at beopen.com  Mon Sep 25 00:34:28 2000
From: guido at beopen.com (Guido van Rossum)
Date: Sun, 24 Sep 2000 17:34:28 -0500
Subject: [Python-Dev] installer difficulties
In-Reply-To: Your message of "Sun, 24 Sep 2000 17:25:34 -0400."
             <LNBBLJKPBEHFEDALKOLCGELOHHAA.tim_one@email.msn.com> 
References: <LNBBLJKPBEHFEDALKOLCGELOHHAA.tim_one@email.msn.com> 
Message-ID: <200009242234.RAA06931@cj20424-a.reston1.va.home.com>

> Thanks!  In the meantime, I pointed Greg to anonymous FTP at
> python.beopen.com, in directory /pub/tmp/.  That's where I orginally
> uploaded the installer, and I doubt our webmasters have had a chance to
> corrupt it yet <0.9 wink>.

Other readers of this forum may find other cruft there that appears
useful; however, I believe the files found there may not be the correct
versions either.

BTW, the source tarball on the new pythonlabs.com site is also
corrupt; the docs are bad links; I suspect that the RPMs are also
corrupt.  What an embarrassment.  (We proofread all the webpages but
never thought of testing the downloads!)

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From tim_one at email.msn.com  Sun Sep 24 23:39:49 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 24 Sep 2000 17:39:49 -0400
Subject: [Python-Dev] How about braindead Unicode "compression"?
In-Reply-To: <39CE7006.D60A603D@lemburg.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCKELPHHAA.tim_one@email.msn.com>

[Tim]
>> Previous objections to compression were, as far as I could
>> tell, based on fear of elaborate schemes that rendered the code
>> unreadable and the access code excruciating.  But if we can get
>> more than a factor of 3 with little work and one new uniform
>> indirection, do people still object?

[M.-A. Lemburg]
> Oh, there was no fear about making the code unreadable...
> Christian and Fredrik were both working on smart schemes.
> My only objection about these was missing documentation
> and generation tools -- vast tables of completely random
> looking byte data are unreadable ;-)

OK, you weren't afraid of making the code unreadable, but you did object to
making it unreadable.  Got it <wink>.  My own view is that the C data table
source code "should be" generated by a straightforward Python program
chewing over the unicode.org data files.  But since that's the correct view,
I'm sure it's yours too.

>> If nobody objects by the end of today, I intend to do it.

> +1 from here.

/F and I talked about it offline.  We'll do *something* before the day is
done, and I suspect everyone will be happy.  Waiting for a superb scheme has
thus far stopped us from making any improvements at all, and at this late
point a Big Crude Yet Delicate Hammer is looking mighty attractive.

petitely y'rs  - tim





From effbot at telia.com  Mon Sep 25 00:01:06 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Mon, 25 Sep 2000 00:01:06 +0200
Subject: [Python-Dev] How about braindead Unicode "compression"?
References: <LNBBLJKPBEHFEDALKOLCKELPHHAA.tim_one@email.msn.com>
Message-ID: <008f01c02672$f3f1a100$766940d5@hagrid>

tim wrote:
> /F and I talked about it offline.  We'll do *something* before the day is
> done, and I suspect everyone will be happy.

Okay, I just went ahead and checked in a new version of the
unicodedata stuff, based on my earlier unidb work.

On windows, the new unicodedata PYD is 120k (down from 600k),
and the source distribution should be about 2 megabytes smaller
than before (!).

If you're on a non-windows platform, please try out the new code
as soon as possible.  You need to check out:

        Modules/unicodedata.c
        Modules/unicodedatabase.c
        Modules/unicodedatabase.h
        Modules/unicodedata_db.h (new file)

Let me know if there are any build problems.

I'll check in the code generator script as soon as I've figured out
where to put it...  (how about Tools/unicode?)

</F>




From mal at lemburg.com  Mon Sep 25 09:57:36 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 25 Sep 2000 09:57:36 +0200
Subject: [Python-Dev] How about braindead Unicode "compression"?
References: <LNBBLJKPBEHFEDALKOLCKELPHHAA.tim_one@email.msn.com>
Message-ID: <39CF0570.FDDCF03C@lemburg.com>

Tim Peters wrote:
> 
> [Tim]
> >> Previous objections to compression were, as far as I could
> >> tell, based on fear of elaborate schemes that rendered the code
> >> unreadable and the access code excruciating.  But if we can get
> >> more than a factor of 3 with little work and one new uniform
> >> indirection, do people still object?
> 
> [M.-A. Lemburg]
> > Oh, there was no fear about making the code unreadable...
> > Christian and Fredrik were both working on smart schemes.
> > My only objection about these was missing documentation
> > and generation tools -- vast tables of completely random
> > looking byte data are unreadable ;-)
> 
> OK, you weren't afraid of making the code unreadable, but you did object to
> making it unreadable.  Got it <wink>. 

Ah yes, the old coffee syndrome again (or maybe just the jet-lag from
watching the Olympics in the very early morning hours).

What I meant was that I consider checking in unreadable
binary goop *without* documentation and generation tools
not a good idea. Now that Fredrik checked in the generators
as well, everything is fine.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From thomas at xs4all.net  Mon Sep 25 15:56:17 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 25 Sep 2000 15:56:17 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules posixmodule.c,2.173,2.174
In-Reply-To: <200009251322.GAA21574@slayer.i.sourceforge.net>; from gvanrossum@users.sourceforge.net on Mon, Sep 25, 2000 at 06:22:04AM -0700
References: <200009251322.GAA21574@slayer.i.sourceforge.net>
Message-ID: <20000925155616.H20757@xs4all.nl>

On Mon, Sep 25, 2000 at 06:22:04AM -0700, Guido van Rossum wrote:
> Update of /cvsroot/python/python/dist/src/Modules
> In directory slayer.i.sourceforge.net:/tmp/cvs-serv21486
> 
> Modified Files:
> 	posixmodule.c 
> Log Message:
> Add missing prototypes for the benefit of SunOS 4.1.4 */

These should go in pyport.h!  Unless you have some reason not to export them
to other files, but in that case we need to take a good look at the whole
pyport.h thing.

> + #if defined(sun) && !defined(__SVR4)
> + /* SunOS 4.1.4 doesn't have prototypes for these: */
> + extern int rename(const char *, const char *);
> + extern int pclose(FILE *);
> + extern int fclose(FILE *);
> + #endif
> + 


-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From jim at interet.com  Mon Sep 25 15:55:56 2000
From: jim at interet.com (James C. Ahlstrom)
Date: Mon, 25 Sep 2000 09:55:56 -0400
Subject: [Python-Dev] Python syntax checker ?
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de>
Message-ID: <39CF596C.17BA4DC5@interet.com>

Martin von Loewis wrote:
> 
> > Would it be possible to write a Python syntax checker that doesn't
> > stop processing at the first error it finds but instead tries to
> > continue as far as possible (much like make -k) ?
> 
> The common approch is to insert or remove tokens, using some
> heuristics. In YACC, it is possible to add error productions to the
> grammar. Whenever an error occurs, the parser assigns all tokens to
> the "error" non-terminal until it concludes that it can perform a
> reduce action.

The following is based on trying (a great learning experience)
to write a better Python lint.

There are IMHO two problems with the current Python
grammar file.  It is not possible to express operator
precedence, so deliberate shift/reduce conflicts are
used instead.  That makes the parse tree complicated
and non-intuitive.  And there is no provision for error
productions.  YACC has both of these as built-in features.

I also found speed problems with tokenize.py.  AFAIK,
it only exists because tokenizer.c does not provide
comments as tokens, but eats them instead.  We could
modify tokenizer.c, then make tokenize.py be the
interface to the fast C tokenizer.  This eliminates the
problem of updating both too.
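
Today's tokenize module does surface comments as tokens (type
tokenize.COMMENT), the service JimA says motivated its existence; a
minimal sketch using the modern generate_tokens API, which postdates
this message:

```python
# Show that tokenize.py yields COMMENT tokens the C tokenizer eats.
import io
import tokenize

source = "x = 1  # the answer\n"
tokens = list(tokenize.generate_tokens(io.StringIO(source).readline))
comments = [tok.string for tok in tokens if tok.type == tokenize.COMMENT]
assert comments == ["# the answer"]
```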

So how about re-writing the Python grammar in YACC in
order to use its more advanced features??  The simple
YACC grammar I wrote for 1.5.2 plus an altered tokenizer.c
parsed the whole of Lib/*.py in a couple of seconds vs. 30
seconds for the first file using Aaron Watters' Python
lint grammar written in Python.

JimA



From bwarsaw at beopen.com  Mon Sep 25 16:18:36 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 25 Sep 2000 10:18:36 -0400 (EDT)
Subject: [Python-Dev] Python syntax checker ?
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de>
	<39CF596C.17BA4DC5@interet.com>
Message-ID: <14799.24252.537090.326130@anthem.concentric.net>

>>>>> "JCA" == James C Ahlstrom <jim at interet.com> writes:

    JCA> So how about re-writing the Python grammar in YACC in
    JCA> order to use its more advanced features??  The simple
    JCA> YACC grammar I wrote for 1.5.2 plus an altered tokenizer.c
    JCA> parsed the whole Lib/*.py in a couple seconds vs. 30
    JCA> seconds for the first file using Aaron Watters' Python
    JCA> lint grammar written in Python.

I've been wanting to check out Antlr (www.antlr.org) because it gives
us the /possibility/ to use the same grammar files for both CPython
and JPython.  One problem though is that it generates Java and C++ so
we'd be accepting our first C++ into the core if we went this route.

-Barry



From gward at mems-exchange.org  Mon Sep 25 16:40:09 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Mon, 25 Sep 2000 10:40:09 -0400
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: <39C8C834.5E3B90E7@lemburg.com>; from mal@lemburg.com on Wed, Sep 20, 2000 at 04:22:44PM +0200
References: <39C8C834.5E3B90E7@lemburg.com>
Message-ID: <20000925104009.A1747@ludwig.cnri.reston.va.us>

On 20 September 2000, M.-A. Lemburg said:
> Would it be possible to write a Python syntax checker that doesn't
> stop processing at the first error it finds but instead tries
> to continue as far as possible (much like make -k) ?
> 
> If yes, could the existing Python parser/compiler be reused for
> such a tool ?


From gward at mems-exchange.org  Mon Sep 25 16:43:10 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Mon, 25 Sep 2000 10:43:10 -0400
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: <14799.24252.537090.326130@anthem.concentric.net>; from bwarsaw@beopen.com on Mon, Sep 25, 2000 at 10:18:36AM -0400
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de> <39CF596C.17BA4DC5@interet.com> <14799.24252.537090.326130@anthem.concentric.net>
Message-ID: <20000925104310.B1747@ludwig.cnri.reston.va.us>

On 25 September 2000, Barry A. Warsaw said:
> I've been wanting to check out Antlr (www.antlr.org) because it gives
> us the /possibility/ to use the same grammar files for both CPython
> and JPython.  One problem though is that it generates Java and C++ so
> we'd be accepting our first C++ into the core if we went this route.

Or contribute a C back-end to ANTLR -- I've been toying with this idea
for, ummm, too damn long now.  Years.

        Greg



From jeremy at beopen.com  Mon Sep 25 16:50:30 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Mon, 25 Sep 2000 10:50:30 -0400 (EDT)
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: <39CF596C.17BA4DC5@interet.com>
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de>
	<39CF596C.17BA4DC5@interet.com>
Message-ID: <14799.26166.965015.344977@bitdiddle.concentric.net>

>>>>> "JCA" == James C Ahlstrom <jim at interet.com> writes:

  JCA> The following is based on trying (a great learning experience)
  JCA> to write a better Python lint.

  JCA> There are IMHO two problems with the current Python grammar
  JCA> file.  It is not possible to express operator precedence, so
  JCA> deliberate shift/reduce conflicts are used instead.  That makes
  JCA> the parse tree complicated and non intuitive.  And there is no
  JCA> provision for error productions.  YACC has both of these as
  JCA> built-in features.

  JCA> I also found speed problems with tokenize.py.  AFAIK, it only
  JCA> exists because tokenizer.c does not provide comments as tokens,
  JCA> but eats them instead.  We could modify tokenizer.c, then make
  JCA> tokenize.py be the interface to the fast C tokenizer.  This
  JCA> eliminates the problem of updating both too.

  JCA> So how about re-writing the Python grammar in YACC in order to
  JCA> use its more advanced features??  The simple YACC grammar I
  JCA> wrote for 1.5.2 plus an altered tokenizer.c parsed the whole
  JCA> Lib/*.py in a couple seconds vs. 30 seconds for the first file
  JCA> using Aaron Watters' Python lint grammar written in Python.

Why not use the actual Python parser instead of tokenize.py?  I assume
it is also faster than Aaron's Python lint grammar written in Python.
The compiler in Tools/compiler uses the parser module internally and
produces an AST that is straightforward to use.  (The parse tree
produced by the parser module is fairly low-level.)

There was a thread (on the compiler-sig, I believe) where Moshe and I
noodled with a simple lint-like warnings framework based on the
compiler package.  I don't have the code we ended up with, but I found
an example checker in the mail archives and have included it below.
It checks for NameErrors.

I believe one useful change that Moshe and I arrived at was to avoid
the explicit stack that the code uses (via enterNamespace and
exitNamespace) and instead pass the namespace as an optional extra
argument to the visitXXX methods.

Jeremy

"""Check for NameErrors"""

from compiler import parseFile, walk
from compiler.misc import Stack, Set

import __builtin__
from UserDict import UserDict

class Warning:
    def __init__(self, filename, funcname, lineno):
        self.filename = filename
        self.funcname = funcname
        self.lineno = lineno

    def __str__(self):
        return self._template % self.__dict__

class UndefinedLocal(Warning):
    super_init = Warning.__init__
    
    def __init__(self, filename, funcname, lineno, name):
        self.super_init(filename, funcname, lineno)
        self.name = name

    _template = "%(filename)s:%(lineno)s  " \
                "%(funcname)s undefined local %(name)s"

class NameError(UndefinedLocal):
    _template = "%(filename)s:%(lineno)s  " \
                "%(funcname)s undefined name %(name)s"

class NameSet(UserDict):
    """Track names and the line numbers where they are referenced"""
    def __init__(self):
        self.data = self.names = {}

    def add(self, name, lineno):
        l = self.names.get(name, [])
        l.append(lineno)
        self.names[name] = l

class CheckNames:
    def __init__(self, filename):
        self.filename = filename
        self.warnings = []
        self.scope = Stack()
        self.gUse = NameSet()
        self.gDef = NameSet()
        # _locals is the stack of local namespaces
        # locals is the top of the stack
        self._locals = Stack()
        self.lUse = None
        self.lDef = None
        self.lGlobals = None # var declared global
        # holds scope,def,use,global triples for later analysis
        self.todo = []

    def enterNamespace(self, node):
        self.scope.push(node)
        self.lUse = use = NameSet()
        self.lDef = _def = NameSet()
        self.lGlobals = gbl = NameSet()
        self._locals.push((use, _def, gbl))

    def exitNamespace(self):
        self.todo.append((self.scope.top(), self.lDef, self.lUse,
                          self.lGlobals))
        self.scope.pop()
        self._locals.pop()
        if self._locals:
            self.lUse, self.lDef, self.lGlobals = self._locals.top()
        else:
            self.lUse = self.lDef = self.lGlobals = None

    def warn(self, warning, funcname, lineno, *args):
        args = (self.filename, funcname, lineno) + args
        self.warnings.append(apply(warning, args))

    def defName(self, name, lineno, local=1):
        if self.lUse is None:
            self.gDef.add(name, lineno)
        elif local == 0:
            self.gDef.add(name, lineno)
            self.lGlobals.add(name, lineno)
        else:
            self.lDef.add(name, lineno)

    def useName(self, name, lineno, local=1):
        if self.lUse is None:
            self.gUse.add(name, lineno)
        elif local == 0:
            self.gUse.add(name, lineno)
            self.lUse.add(name, lineno)            
        else:
            self.lUse.add(name, lineno)

    def check(self):
        for s, d, u, g in self.todo:
            self._check(s, d, u, g, self.gDef)
        # XXX then check the globals

    def _check(self, scope, _def, use, gbl, globals):
        # check for NameError
        # a name is defined iff it is in def.keys()
        # a name is global iff it is in gdefs.keys()
        gdefs = UserDict()
        gdefs.update(globals)
        gdefs.update(__builtin__.__dict__)
        defs = UserDict()
        defs.update(gdefs)
        defs.update(_def)
        errors = Set()
        for name in use.keys():
            if not defs.has_key(name):
                firstuse = use[name][0]
                self.warn(NameError, scope.name, firstuse, name)
                errors.add(name)

        # check for UndefinedLocalNameError
        # order == use & def sorted by lineno
        # elements are lineno, flag, name
        # flag = 0 if use, flag = 1 if def
        order = []
        for name, lines in use.items():
            if gdefs.has_key(name) and not _def.has_key(name):
                # this is a global ref, we can skip it
                continue
            for lineno in lines:
                order.append((lineno, 0, name))
        for name, lines in _def.items():
            for lineno in lines:
                order.append((lineno, 1, name))
        order.sort()
        # ready contains names that have been defined or warned about
        ready = Set()
        for lineno, flag, name in order:
            if flag == 0: # use
                if not ready.has_elt(name) and not errors.has_elt(name):
                    self.warn(UndefinedLocal, scope.name, lineno, name)
                    ready.add(name) # don't warn again
            else:
                ready.add(name)

    # below are visitor methods

    def visitFunction(self, node, noname=0):
        for expr in node.defaults:
            self.visit(expr)
        if not noname:
            self.defName(node.name, node.lineno)
        self.enterNamespace(node)
        for name in node.argnames:
            self.defName(name, node.lineno)
        self.visit(node.code)
        self.exitNamespace()
        return 1

    def visitLambda(self, node):
        return self.visitFunction(node, noname=1)

    def visitClass(self, node):
        for expr in node.bases:
            self.visit(expr)
        self.defName(node.name, node.lineno)
        self.enterNamespace(node)
        self.visit(node.code)
        self.exitNamespace()
        return 1

    def visitName(self, node):
        self.useName(node.name, node.lineno)

    def visitGlobal(self, node):
        for name in node.names:
            self.defName(name, node.lineno, local=0)

    def visitImport(self, node):
        for name, alias in node.names:
            self.defName(alias or name, node.lineno)

    visitFrom = visitImport

    def visitAssName(self, node):
        self.defName(node.name, node.lineno)
    
def check(filename):
    global p, checker
    p = parseFile(filename)
    checker = CheckNames(filename)
    walk(p, checker)
    checker.check()
    for w in checker.warnings:
        print w

if __name__ == "__main__":
    import sys

    # XXX need to do real arg processing
    check(sys.argv[1])
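[The stack-free variant described above -- passing the namespace as an
extra argument to the visitXXX methods instead of maintaining it via
enterNamespace/exitNamespace -- can be sketched with toy classes.  The
Node and NameChecker classes below are invented stand-ins for
illustration; the real compiler-package visitor API differs:]

```python
class Node:
    """Toy stand-in for a compiler-package AST node."""
    def __init__(self, kind, name=None, children=()):
        self.kind = kind
        self.name = name
        self.children = list(children)

class NameChecker:
    """Visitor that threads the namespace through visit() calls
    instead of pushing and popping an explicit stack."""
    def __init__(self):
        self.warnings = []

    def visit(self, node, ns):
        getattr(self, 'visit' + node.kind)(node, ns)

    def visitModule(self, node, ns):
        for child in node.children:
            self.visit(child, ns)

    def visitFunction(self, node, ns):
        ns[node.name] = True          # the def binds the function name
        inner = dict(ns)              # child scope sees enclosing defs
        for child in node.children:
            self.visit(child, inner)

    def visitAssName(self, node, ns):
        ns[node.name] = True

    def visitName(self, node, ns):
        if node.name not in ns:
            self.warnings.append(node.name)

# module body:  def f(): x = 1; use x; use y   -- only 'y' is undefined
tree = Node('Module', children=[
    Node('Function', 'f', children=[
        Node('AssName', 'x'),
        Node('Name', 'x'),
        Node('Name', 'y'),
    ]),
])
checker = NameChecker()
checker.visit(tree, {})
print(checker.warnings)   # ['y']
```

[With this shape the exitNamespace bookkeeping disappears entirely: the
enclosing namespace is simply whatever dict the caller passed in.]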




From nascheme at enme.ucalgary.ca  Mon Sep 25 16:57:42 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Mon, 25 Sep 2000 08:57:42 -0600
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: <20000925104009.A1747@ludwig.cnri.reston.va.us>; from Greg Ward on Mon, Sep 25, 2000 at 10:40:09AM -0400
References: <39C8C834.5E3B90E7@lemburg.com> <20000925104009.A1747@ludwig.cnri.reston.va.us>
Message-ID: <20000925085742.A26922@keymaster.enme.ucalgary.ca>

On Mon, Sep 25, 2000 at 10:40:09AM -0400, Greg Ward wrote:
> PCCTS 1.x (the precursor to ANTLR 2.x) is the only parser generator
> I've used personally

How different are PCCTS and ANTLR?  Perhaps we could use PCCTS for
CPython and ANTLR for JPython.

  Neil



From guido at beopen.com  Mon Sep 25 18:06:40 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 25 Sep 2000 11:06:40 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules posixmodule.c,2.173,2.174
In-Reply-To: Your message of "Mon, 25 Sep 2000 15:56:17 +0200."
             <20000925155616.H20757@xs4all.nl> 
References: <200009251322.GAA21574@slayer.i.sourceforge.net>  
            <20000925155616.H20757@xs4all.nl> 
Message-ID: <200009251606.LAA19626@cj20424-a.reston1.va.home.com>

> > Modified Files:
> > 	posixmodule.c 
> > Log Message:
> > Add missing prototypes for the benefit of SunOS 4.1.4 */
> 
> These should go in pyport.h ! Unless you have some reason not to export them
> to other file, but in that case we need to take a good look at the whole
> pyport.h thing.
> 
> > + #if defined(sun) && !defined(__SVR4)
> > + /* SunOS 4.1.4 doesn't have prototypes for these: */
> > + extern int rename(const char *, const char *);
> > + extern int pclose(FILE *);
> > + extern int fclose(FILE *);
> > + #endif
> > + 

Maybe, but there's already tons of platform-specific junk in
posixmodule.c.  Given we're so close to the code freeze, let's not do
it right now.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From jim at interet.com  Mon Sep 25 17:05:56 2000
From: jim at interet.com (James C. Ahlstrom)
Date: Mon, 25 Sep 2000 11:05:56 -0400
Subject: [Python-Dev] Python syntax checker ?
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de>
		<39CF596C.17BA4DC5@interet.com> <14799.24252.537090.326130@anthem.concentric.net>
Message-ID: <39CF69D4.E3649C69@interet.com>

"Barry A. Warsaw" wrote:
> I've been wanting to check out Antlr (www.antlr.org) because it gives
> us the /possibility/ to use the same grammar files for both CPython
> and JPython.  One problem though is that it generates Java and C++ so
> we'd be accepting our first C++ into the core if we went this route.

Yes, but why not YACC?  Is Antlr so much better, or is
YACC too primitive, or what?  IMHO, adding C++ just for
parsing is not going to happen, so Antlr is not going to
happen either.

JimA



From gward at mems-exchange.org  Mon Sep 25 17:07:53 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Mon, 25 Sep 2000 11:07:53 -0400
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: <20000925085742.A26922@keymaster.enme.ucalgary.ca>; from nascheme@enme.ucalgary.ca on Mon, Sep 25, 2000 at 08:57:42AM -0600
References: <39C8C834.5E3B90E7@lemburg.com> <20000925104009.A1747@ludwig.cnri.reston.va.us> <20000925085742.A26922@keymaster.enme.ucalgary.ca>
Message-ID: <20000925110752.A1891@ludwig.cnri.reston.va.us>

On 25 September 2000, Neil Schemenauer said:
> How different are PCCTS and ANTLR?  Perhaps we could use PCCTS for
> CPython and ANTLR for JPython.

I can't speak from experience; I've only looked briefly at ANTLR.  But
it looks like they are as different as two LL(k) parser generators
written by the same guy can be.  I.e. the same general philosophy, but
not much similarity beyond that.

Also, to be blunt, the C back-end PCCTS 1.x has a lot of serious
problems.  It's heavily dependent on global variables, so goodbye to a
thread-safe lexer/parser.  It uses boatloads of tricky macros, which
makes debugging the lexer a bear.  It's well-nigh impossible to remember
which macros are defined in which .c files, which functions are defined
in which .h files, and so forth.  (No really! it's like that!)

I think it would be much healthier to take the sound OO thinking that
went into the original C++ back-end for PCCTS 1.x, and that evolved
further with the Java and C++ back-ends for ANTLR 2.x, and do the same
sort of stuff in C.  Writing good solid code in C isn't impossible, it's
just tricky.  And the code generated by PCCTS 1.x is *not* good solid C
code (IMHO).

        Greg



From cgw at fnal.gov  Mon Sep 25 17:12:35 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Mon, 25 Sep 2000 10:12:35 -0500 (CDT)
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: <20000925085742.A26922@keymaster.enme.ucalgary.ca>
References: <39C8C834.5E3B90E7@lemburg.com>
	<20000925104009.A1747@ludwig.cnri.reston.va.us>
	<20000925085742.A26922@keymaster.enme.ucalgary.ca>
Message-ID: <14799.27491.414160.577996@buffalo.fnal.gov>

I think the more that can be done in Python itself, rather than with
external code like Antlr, the better.  Who cares if it is slow?  I
could imagine a 2-pass approach where the internal Python parser is
used to construct a parse tree which is then checked for certain
errors.  I wrote something like this to check for mismatched numbers
of '%' values and arguments in string-formatting operations (see
http://home.fnal.gov/~cgw/python/check_pct.html if you are
interested).

Only sections of code which cannot be parsed by Python's internal
parser would then need to be checked by the "stage 2" checker, which
could afford to give up speed for accuracy.  This is the part I think
should be done in Python... for all the reasons we like Python;
flexibility, maintainabilty, etc.





From bwarsaw at beopen.com  Mon Sep 25 17:23:40 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 25 Sep 2000 11:23:40 -0400 (EDT)
Subject: [Python-Dev] Python syntax checker ?
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de>
	<39CF596C.17BA4DC5@interet.com>
	<14799.24252.537090.326130@anthem.concentric.net>
	<20000925104310.B1747@ludwig.cnri.reston.va.us>
Message-ID: <14799.28156.687176.869540@anthem.concentric.net>

>>>>> "GW" == Greg Ward <gward at mems-exchange.org> writes:

    GW> Or contribute a C back-end to ANTLR -- I've been toying with
    GW> this idea for, ummm, too damn long now.  Years.

Yes (to both :).

>>>>> "NS" == Neil Schemenauer <nascheme at enme.ucalgary.ca> writes:

    NS> How different are PCCTS and ANTLR?  Perhaps we could use PCCTS
    NS> for CPython and ANTLR for JPython.

Unknown.  It would only make sense if the same grammar files could be
fed to each.  I have no idea whether that's true or not.  If not,
Greg's idea is worth researching.

-Barry



From loewis at informatik.hu-berlin.de  Mon Sep 25 17:36:24 2000
From: loewis at informatik.hu-berlin.de (Martin von Loewis)
Date: Mon, 25 Sep 2000 17:36:24 +0200 (MET DST)
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: <39CF69D4.E3649C69@interet.com> (jim@interet.com)
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de>
		<39CF596C.17BA4DC5@interet.com> <14799.24252.537090.326130@anthem.concentric.net> <39CF69D4.E3649C69@interet.com>
Message-ID: <200009251536.RAA26375@pandora.informatik.hu-berlin.de>

> Yes, but why not YACC?  Is Antlr so much better, or is
> YACC too primitive, or what?  IMHO, adding C++ just for
> parsing is not going to happen, so Antlr is not going to
> happen either.

I think the advantage that Barry saw is that ANTLR generates Java in
addition to C, so it could be used in JPython as well. In addition,
ANTLR is more advanced than YACC; it specifically supports full EBNF
as input, and has better mechanisms for conflict resolution.

On the YACC for Java side, Axel Schreiner has developed jay, see
http://www2.informatik.uni-osnabrueck.de/bernd/jay/staff/design/de/Artikel.htmld/
(if you read German, otherwise don't bother :-)

The main problem with multilanguage output is the semantic actions -
it would be quite a stunt to put semantic actions into the parser
which are valid both in C and Java :-) On that front, there is also
CUP (http://www.cs.princeton.edu/~appel/modern/java/CUP/), which has
different markup for Java actions ({: ... :}).

There is also BYACC/J, a patch to Berkeley Yacc to produce Java
(http://www.lincom-asg.com/~rjamison/byacc/).

Personally, I'm quite in favour of having the full parser source
(including parser generator if necessary) in the Python source
distribution. As a GCC contributor, I know what pain it is for users
that GCC requires bison to build - even though it is only required for
CVS builds, as distributions come with the generated files.

Regards,
Martin




From gward at mems-exchange.org  Mon Sep 25 18:22:35 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Mon, 25 Sep 2000 12:22:35 -0400
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: <14799.28156.687176.869540@anthem.concentric.net>; from bwarsaw@beopen.com on Mon, Sep 25, 2000 at 11:23:40AM -0400
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de> <39CF596C.17BA4DC5@interet.com> <14799.24252.537090.326130@anthem.concentric.net> <20000925104310.B1747@ludwig.cnri.reston.va.us> <14799.28156.687176.869540@anthem.concentric.net>
Message-ID: <20000925122235.A2167@ludwig.cnri.reston.va.us>

On 25 September 2000, Barry A. Warsaw said:
>     NS> How different are PCCTS and ANTLR?  Perhaps we could use PCCTS
>     NS> for CPython and ANTLR for JPython.
> 
> Unknown.  It would only make sense if the same grammar files could be
> fed to each.  I have no idea whether that's true or not.  If not,
> Greg's idea is worth researching.

PCCTS 1.x grammar files tend to have lots of C code interwoven in them
-- at least for tricky, ill-defined grammars like BibTeX.  ;-)

ANTLR 2.x grammars certainly allow Java code to be woven into them; I
assume you can instead weave C++ or Sather if that's your preference.
Obviously, this would be one problem with having a common grammar for
JPython and CPython.

        Greg



From mal at lemburg.com  Mon Sep 25 18:39:22 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 25 Sep 2000 18:39:22 +0200
Subject: [Python-Dev] Python syntax checker ?
References: <39C8C834.5E3B90E7@lemburg.com> <20000925104009.A1747@ludwig.cnri.reston.va.us>
Message-ID: <39CF7FBA.A54C40D@lemburg.com>

Greg Ward wrote:
> 
> On 20 September 2000, M.-A. Lemburg said:
> > Would it be possible to write a Python syntax checker that doesn't
> > stop processing at the first error it finds but instead tries
> > to continue as far as possible (much like make -k) ?
> >
> > If yes, could the existing Python parser/compiler be reused for
> > such a tool ?
> 
> From what I understand of Python's parser and parser generator, no.
> Recovering from errors is indeed highly non-trivial.  If you're really
> interested, I'd look into Terence Parr's ANTLR -- it's a very fancy
> parser generator that's waaay ahead of pgen (or lex/yacc, for that
> matter).  ANTLR 2.x is highly Java-centric, and AFAIK doesn't yet have a
> C backend (grumble) -- just C++ and Java.  (Oh wait, the antlr.org web
> site says it can generate Sather too -- now there's an important
> mainstream language!  ;-)

Thanks, I'll have a look.
 
> Tech notes: like pgen, ANTLR is LL; it generates a recursive-descent
> parser.  Unlike pgen, ANTLR is LL(k) -- it can support arbitrary
> lookahead, although k>2 can make parser generation expensive (not
> parsing itself, just turning your grammar into code), as well as make
> your language harder to understand.  (I have a theory that pgen's k=1
> limitation has been a brick wall in the way of making Python's syntax
> more complex, i.e. it's a *feature*!)
> 
> More importantly, ANTLR has good support for error recovery.  My BibTeX
> parser has a lot of fun recovering from syntax errors, and (with a
> little smoke 'n mirrors magic in the lexing stage) does a pretty good
> job of it.  But you're right, it's *not* trivial to get this stuff
> right.  And without support from the parser generator, I suspect you
> would be in a world of hurtin'.

I was actually thinking of extracting the Python tokenizer and
parser from the Python source and tweaking it until it did
what I wanted it to do, ie. not generate valid code but produce
valid error messages ;-)

Now from the feedback I got it seems that this is not the
right approach. I'm not even sure whether using a parser
at all is the right way... I may have to stick to a fairly
general tokenizer and then try to solve the problem in chunks
of code (much like what Guido hinted at in his reply), possibly
even by doing trial and error using the Python builtin compiler
on these chunks.
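[The trial-and-error idea in the preceding paragraph can be sketched
directly in modern Python: split the source into chunks and try
compile() on each one, collecting errors instead of stopping at the
first, make -k style.  This is an invented illustration with
deliberately naive chunking at column-zero lines, not the tool under
discussion:]

```python
def check_source(source):
    """Compile each top-level chunk separately; return all SyntaxErrors."""
    chunks, chunk = [], []
    for line in source.splitlines():
        # a new column-zero statement starts a new chunk (naive!)
        if line and not line[0].isspace() and chunk:
            chunks.append("\n".join(chunk))
            chunk = []
        chunk.append(line)
    if chunk:
        chunks.append("\n".join(chunk))

    errors = []
    for c in chunks:
        try:
            compile(c, "<chunk>", "exec")
        except SyntaxError as e:
            errors.append(e)      # record the error, keep going
    return errors

bad = "def f():\n    return 1\n\ndef g(:\n    pass\n\nx = 1\n"
for e in check_source(bad):
    print(e.msg)    # only the broken 'def g(:' chunk is reported
```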

Oh well,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From fdrake at beopen.com  Mon Sep 25 19:04:18 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Mon, 25 Sep 2000 13:04:18 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules getpath.c,1.30,1.31
In-Reply-To: <200009251700.KAA27700@slayer.i.sourceforge.net>
References: <200009251700.KAA27700@slayer.i.sourceforge.net>
Message-ID: <14799.34194.855026.395907@cj42289-a.reston1.va.home.com>

Jeremy Hylton writes:
 > fix bug #114290: when interpreter's argv[0] has a relative path make
 >     it absolute by joining it with getcwd result.  avoid including
 >     unnecessary ./ in path but do not test for ../ (more complicated)
...
 > +     else if (argv0_path[0] == '.') {
 > + 	getcwd(path, MAXPATHLEN);
 > + 	if (argv0_path[1] == '/') 
 > + 	    joinpath(path, argv0_path + 2);

  Did you test this when argv[0] is something like './/foo/bin/python'?


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From effbot at telia.com  Mon Sep 25 19:18:21 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Mon, 25 Sep 2000 19:18:21 +0200
Subject: [Python-Dev] last second patches (was: regarding the Python Developer posting...)
References: <LNBBLJKPBEHFEDALKOLCMEKAHHAA.tim_one@email.msn.com> <39CE71FD.8858B71D@lemburg.com>
Message-ID: <016e01c02714$f945bc20$766940d5@hagrid>

in response to a OS X compiler problem, mal wrote:
> You could try to enable the macro at the top of unicodectype.c:
>  
> #if defined(macintosh) || defined(MS_WIN64)
> /*XXX This was required to avoid a compiler error for an early Win64
>  * cross-compiler that was used for the port to Win64. When the platform is
>  * released the MS_WIN64 inclusion here should no longer be necessary.
>  */
> /* This probably needs to be defined for some other compilers too. It breaks the
> ** 5000-label switch statement up into switches with around 1000 cases each.
> */
> #define BREAK_SWITCH_UP return 1; } switch (ch) {
> #else
> #define BREAK_SWITCH_UP /* nothing */
> #endif
> 
> If it does compile with the work-around enabled, please
> give us a set of defines which identify the compiler and
> platform so we can enable it per default for your setup.

I have a 500k "negative patch" sitting on my machine which removes
most of unicodectype.c, replacing it with a small data table (based on
the same unidb work as yesterday's unicodedatabase patch).

out
</F>

# dump all known unicode data

import unicodedata

for i in range(65536):
    char = unichr(i)
    data = (
        # ctype predicates
        char.isalnum(),
        char.isalpha(),
        char.isdecimal(),
        char.isdigit(),
        char.islower(),
        char.isnumeric(),
        char.isspace(),
        char.istitle(),
        char.isupper(),
        # ctype mappings
        char.lower(),
        char.upper(),
        char.title(),
        # properties
        unicodedata.digit(char, None),
        unicodedata.numeric(char, None),
        unicodedata.decimal(char, None),
        unicodedata.category(char),
        unicodedata.bidirectional(char),
        unicodedata.decomposition(char),
        unicodedata.mirrored(char),
        unicodedata.combining(char)
        )





From effbot at telia.com  Mon Sep 25 19:27:19 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Mon, 25 Sep 2000 19:27:19 +0200
Subject: [Python-Dev] last second patches (was: regarding the Python Developer posting...)
References: <LNBBLJKPBEHFEDALKOLCMEKAHHAA.tim_one@email.msn.com> <39CE71FD.8858B71D@lemburg.com> <016e01c02714$f945bc20$766940d5@hagrid>
Message-ID: <017801c02715$ebcc38c0$766940d5@hagrid>

oops.  mailer problem; here's the rest of the mail:

> I have a 500k "negative patch" sitting on my machine which removes
> most of unicodectype.c, replacing it with a small data table (based on
> the same unidb work as yesterday's unicodedatabase patch).

(this shaves another 400-500k off the source distribution,
and 10-20k in the binaries...)

I've verified that all ctype-related methods return the same result
as before the patch, for all characters in the unicode set (see the
attached script).

should I check it in?

</F>




From mal at lemburg.com  Mon Sep 25 19:46:21 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 25 Sep 2000 19:46:21 +0200
Subject: [Python-Dev] last second patches (was: regarding the Python 
 Developer posting...)
References: <LNBBLJKPBEHFEDALKOLCMEKAHHAA.tim_one@email.msn.com> <39CE71FD.8858B71D@lemburg.com> <016e01c02714$f945bc20$766940d5@hagrid> <017801c02715$ebcc38c0$766940d5@hagrid>
Message-ID: <39CF8F6D.3F32C8FD@lemburg.com>

Fredrik Lundh wrote:
> 
> oops.  mailer problem; here's the rest of the mail:
> 
> > I have a 500k "negative patch" sitting on my machine which removes
> > most of unicodectype.c, replacing it with a small data table (based on
> > the same unidb work as yesterday's unicodedatabase patch).
> 
> (this shaves another 400-500k off the source distribution,
> and 10-20k in the binaries...)
> 
> I've verified that all ctype-related methods return the same result
> as before the patch, for all characters in the unicode set (see the
> attached script).
> 
> should I check it in?

Any chance of taking a look at it first ? (BTW, what happened to the
usual post to SF, review, then checkin cycle ?)

The C type checks are a little performance sensitive since they
are used on a char by char basis in the C implementation of
.upper(), etc. -- do the new methods give the same performance ?

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From tim_one at email.msn.com  Mon Sep 25 19:55:49 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Mon, 25 Sep 2000 13:55:49 -0400
Subject: [Python-Dev] last second patches (was: regarding the Python  Developer posting...)
In-Reply-To: <39CF8F6D.3F32C8FD@lemburg.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEOKHHAA.tim_one@email.msn.com>

[M.-A. Lemburg, on /F's Unicode patches]
> Any chance of taking a look at it first ? (BTW, what happened to the
> usual post to SF, review, then checkin cycle ?)

I encouraged /F *not* to submit a patch for the unicodedatabase.c change.
He knows what he's doing, experts in an area are allowed (see PEP200) to
skip the patch business, and we're trying to make quick progress before
2.0b2 ships.

This change may be more controversial, though:

> The C type checks are a little performance sensitive since they
> are used on a char by char basis in the C implementation of
> .upper(), etc. -- do the new methods give the same performance ?

Don't know.  Although it's hard to imagine we have any Unicode apps out
there now that will notice one way or the other <wink>.





From effbot at telia.com  Mon Sep 25 20:08:22 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Mon, 25 Sep 2000 20:08:22 +0200
Subject: [Python-Dev] last second patches (was: regarding the Python  Developer posting...)
References: <LNBBLJKPBEHFEDALKOLCMEKAHHAA.tim_one@email.msn.com> <39CE71FD.8858B71D@lemburg.com> <016e01c02714$f945bc20$766940d5@hagrid> <017801c02715$ebcc38c0$766940d5@hagrid> <39CF8F6D.3F32C8FD@lemburg.com>
Message-ID: <003601c0271c$1b814c80$766940d5@hagrid>

mal wrote:
> Any chance of taking a look at it first ?

same as unicodedatabase.c, just other data.

> (BTW, what happened to the usual post to SF, review, then
> checkin cycle ?)

two problems: SF cannot handle patches larger than 500k,
and we're in ship mode...

> The C type checks are a little performance sensitive since they
> are used on a char by char basis in the C implementation of
> .upper(), etc. -- do the new methods give the same performance ?

well, they're about 40% faster on my box.  ymmv, of course.

</F>




From gward at mems-exchange.org  Mon Sep 25 20:05:12 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Mon, 25 Sep 2000 14:05:12 -0400
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: <200009251536.RAA26375@pandora.informatik.hu-berlin.de>; from loewis@informatik.hu-berlin.de on Mon, Sep 25, 2000 at 05:36:24PM +0200
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de> <39CF596C.17BA4DC5@interet.com> <14799.24252.537090.326130@anthem.concentric.net> <39CF69D4.E3649C69@interet.com> <200009251536.RAA26375@pandora.informatik.hu-berlin.de>
Message-ID: <20000925140511.A2319@ludwig.cnri.reston.va.us>

On 25 September 2000, Martin von Loewis said:
> Personally, I'm quite in favour of having the full parser source
> (including parser generator if necessary) in the Python source
> distribution. As a GCC contributor, I know what pain it is for users
> that GCC requires bison to build - even though it is only required for
> CVS builds, as distributions come with the generated files.

This would be a strike against ANTLR, since it's written in Java -- and
therefore is about as portable as a church.  ;-(

It should be possible to generate good, solid, portable C code... but
AFAIK no one has done so to date with ANTLR 2.x.

        Greg



From jeremy at beopen.com  Mon Sep 25 20:11:12 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Mon, 25 Sep 2000 14:11:12 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules getpath.c,1.30,1.31
In-Reply-To: <14799.34194.855026.395907@cj42289-a.reston1.va.home.com>
References: <200009251700.KAA27700@slayer.i.sourceforge.net>
	<14799.34194.855026.395907@cj42289-a.reston1.va.home.com>
Message-ID: <14799.38208.987507.250305@bitdiddle.concentric.net>

>>>>> "FLD" == Fred L Drake, <fdrake at beopen.com> writes:

  FLD> Did you test this when argv[0] is something like
  FLD> './/foo/bin/python'? 

No.  Two questions: What would that mean? How could I generate it?

Jeremy





From fdrake at beopen.com  Mon Sep 25 20:07:00 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Mon, 25 Sep 2000 14:07:00 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules getpath.c,1.30,1.31
In-Reply-To: <14799.38208.987507.250305@bitdiddle.concentric.net>
References: <200009251700.KAA27700@slayer.i.sourceforge.net>
	<14799.34194.855026.395907@cj42289-a.reston1.va.home.com>
	<14799.38208.987507.250305@bitdiddle.concentric.net>
Message-ID: <14799.37956.408416.190160@cj42289-a.reston1.va.home.com>

Jeremy Hylton writes:
 >   FLD> Did you test this when argv[0] is something like
 >   FLD> './/foo/bin/python'? 
 > 
 > No.  Two questions: What would that mean? How could I generate it?

  That should mean the same as './foo/bin/python' since multiple '/'
are equivalent to a single '/' on Unix.  (Same for r'\' on Windows
since this won't interfere with UNC paths (like '\\host\foo\bin...')).
  You can do this using fork/exec.
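  [For what it's worth, the collapsing described above is the same
normalization that posixpath.normpath performs -- a quick aside, not
part of the original exchange:]

```python
import posixpath

# Repeated separators and a leading './' normalize away on POSIX:
print(posixpath.normpath(".//foo/bin/python"))    # foo/bin/python
print(posixpath.normpath("./foo//bin///python"))  # foo/bin/python

# Exactly two leading slashes are special (POSIX leaves them alone),
# which is roughly why UNC-style paths need separate care on Windows:
print(posixpath.normpath("//host/share"))         # //host/share
```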


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From jeremy at beopen.com  Mon Sep 25 20:20:20 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Mon, 25 Sep 2000 14:20:20 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules getpath.c,1.30,1.31
In-Reply-To: <14799.37956.408416.190160@cj42289-a.reston1.va.home.com>
References: <200009251700.KAA27700@slayer.i.sourceforge.net>
	<14799.34194.855026.395907@cj42289-a.reston1.va.home.com>
	<14799.38208.987507.250305@bitdiddle.concentric.net>
	<14799.37956.408416.190160@cj42289-a.reston1.va.home.com>
Message-ID: <14799.38756.174565.664691@bitdiddle.concentric.net>

>>>>> "FLD" == Fred L Drake, <fdrake at beopen.com> writes:

  FLD> Jeremy Hylton writes: Did you test this when argv[0] is
  FLD> something like './/foo/bin/python'?
  >>
  >> No.  Two questions: What would that mean? How could I generate
  >> it?

  FLD>   That should mean the same as './foo/bin/python' since
  FLD>   multiple '/' are equivalent to a single '/' on Unix.

Ok.  Tested with os.execv and it works correctly.

Did you see my query (in private email) about 1) whether it works on
Windows and 2) whether I should worry about platforms that don't have
a valid getcwd?

Jeremy





From effbot at telia.com  Mon Sep 25 20:26:16 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Mon, 25 Sep 2000 20:26:16 +0200
Subject: [Python-Dev] CVS problems
References: <200009251700.KAA27700@slayer.i.sourceforge.net><14799.34194.855026.395907@cj42289-a.reston1.va.home.com><14799.38208.987507.250305@bitdiddle.concentric.net> <14799.37956.408416.190160@cj42289-a.reston1.va.home.com>
Message-ID: <006c01c0271e$1a72b0c0$766940d5@hagrid>

> cvs add Objects\unicodetype_db.h
cvs server: scheduling file `Objects/unicodetype_db.h' for addition
cvs server: use 'cvs commit' to add this file permanently

> cvs commit Objects\unicodetype_db.h
cvs server: [11:05:10] waiting for anoncvs_python's lock in /cvsroot/python/python/dist/src/Objects

yet another stale lock?  if so, what happened?  and more
importantly, how do I get rid of it?

</F>




From thomas at xs4all.net  Mon Sep 25 20:23:22 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 25 Sep 2000 20:23:22 +0200
Subject: [Python-Dev] CVS problems
In-Reply-To: <006c01c0271e$1a72b0c0$766940d5@hagrid>; from effbot@telia.com on Mon, Sep 25, 2000 at 08:26:16PM +0200
References: <200009251700.KAA27700@slayer.i.sourceforge.net><14799.34194.855026.395907@cj42289-a.reston1.va.home.com><14799.38208.987507.250305@bitdiddle.concentric.net> <14799.37956.408416.190160@cj42289-a.reston1.va.home.com> <006c01c0271e$1a72b0c0$766940d5@hagrid>
Message-ID: <20000925202322.I20757@xs4all.nl>

On Mon, Sep 25, 2000 at 08:26:16PM +0200, Fredrik Lundh wrote:
> > cvs add Objects\unicodetype_db.h
> cvs server: scheduling file `Objects/unicodetype_db.h' for addition
> cvs server: use 'cvs commit' to add this file permanently
> 
> > cvs commit Objects\unicodetype_db.h
> cvs server: [11:05:10] waiting for anoncvs_python's lock in /cvsroot/python/python/dist/src/Objects
> 
> yet another stale lock?  if so, what happened?  and more
> importantly, how do I get rid of it?

This might not be a stale lock. Because it's anoncvs's lock, it can't be a
write lock. I've seen this before (mostly on checking out) and it can take
quite a while for the CVS process to continue :P But in my case, eventually
it did. If it stays longer than, say, 30 minutes, it's probably
SF-bug-reporting-time again :P

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From tim_one at email.msn.com  Mon Sep 25 20:24:25 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Mon, 25 Sep 2000 14:24:25 -0400
Subject: [Python-Dev] CVS problems
In-Reply-To: <006c01c0271e$1a72b0c0$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEOMHHAA.tim_one@email.msn.com>

[Fredrik Lundh]
> > cvs add Objects\unicodetype_db.h
> cvs server: scheduling file `Objects/unicodetype_db.h' for addition
> cvs server: use 'cvs commit' to add this file permanently
>
> > cvs commit Objects\unicodetype_db.h
> cvs server: [11:05:10] waiting for anoncvs_python's lock in
> /cvsroot/python/python/dist/src/Objects
>
> yet another stale lock?  if so, what happened?  and more
> importantly, how do I get rid of it?

I expect this one goes away by itself -- anoncvs can't be doing a commit,
and I don't believe we've ever seen a stale lock from anoncvs.  Probably
just some fan doing their first read-only checkout over a slow line.  BTW, I
just did a full update & didn't get any lock msgs.  Try again!





From effbot at telia.com  Mon Sep 25 21:04:26 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Mon, 25 Sep 2000 21:04:26 +0200
Subject: [Python-Dev] CVS problems
References: <LNBBLJKPBEHFEDALKOLCMEOMHHAA.tim_one@email.msn.com>
Message-ID: <00bc01c02723$6f8faf40$766940d5@hagrid>

tim wrote:
> > > cvs commit Objects\unicodetype_db.h
> > cvs server: [11:05:10] waiting for anoncvs_python's lock in
> > /cvsroot/python/python/dist/src/Objects
> >
> I expect this one goes away by itself -- anoncvs can't be doing a commit,
> and I don't believe we've ever seen a stale lock from anoncvs.  Probably
> just some fan doing their first read-only checkout over a slow line.

I can update alright, but I still get this message when I try
to commit stuff.  this message, or timeouts from the server.

annoying...

</F>




From guido at beopen.com  Mon Sep 25 22:21:11 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 25 Sep 2000 15:21:11 -0500
Subject: [Python-Dev] last second patches (was: regarding the Python Developer posting...)
In-Reply-To: Your message of "Mon, 25 Sep 2000 20:08:22 +0200."
             <003601c0271c$1b814c80$766940d5@hagrid> 
References: <LNBBLJKPBEHFEDALKOLCMEKAHHAA.tim_one@email.msn.com> <39CE71FD.8858B71D@lemburg.com> <016e01c02714$f945bc20$766940d5@hagrid> <017801c02715$ebcc38c0$766940d5@hagrid> <39CF8F6D.3F32C8FD@lemburg.com>  
            <003601c0271c$1b814c80$766940d5@hagrid> 
Message-ID: <200009252021.PAA20146@cj20424-a.reston1.va.home.com>

> mal wrote:
> > Any chance of taking a look at it first ?
> 
> same as unicodedatabase.c, just other data.
> 
> > (BTW, what happened to the usual post to SF, review, then
> > checkin cycle ?)
> 
> two problems: SF cannot handle patches larger than 500k.
> and we're in ship mode...
> 
> > The C type checks are a little performance sensitive since they
> > are used on a char by char basis in the C implementation of
> > .upper(), etc. -- do the new methods give the same performance ?
> 
> well, they're about 40% faster on my box.  ymmv, of course.

Fredrik, why don't you make your patch available for review by
Marc-Andre -- after all he "owns" this code (is the original author).
If Marc-Andre agrees, and Jeremy has enough time to finish the release
on time, I have no problem with checking it in.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From jeremy at beopen.com  Mon Sep 25 22:02:25 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Mon, 25 Sep 2000 16:02:25 -0400 (EDT)
Subject: [Python-Dev] CVS problems
In-Reply-To: <00bc01c02723$6f8faf40$766940d5@hagrid>
References: <LNBBLJKPBEHFEDALKOLCMEOMHHAA.tim_one@email.msn.com>
	<00bc01c02723$6f8faf40$766940d5@hagrid>
Message-ID: <14799.44881.753935.662313@bitdiddle.concentric.net>

>>>>> "FL" == Fredrik Lundh <effbot at telia.com> writes:

  FL>> cvs commit Objects\unicodetype_db.h
  >> > cvs server: [11:05:10] waiting for anoncvs_python's lock in
  >> > /cvsroot/python/python/dist/src/Objects
  >> >
  [tim wrote:]
  >> I expect this one goes away by itself -- anoncvs can't be doing a
  >> commit, and I don't believe we've ever seen a stale lock from
  >> anoncvs.  Probably just some fan doing their first read-only
  >> checkout over a slow line.

  FL> I can update alright, but I still get this message when I try to
  FL> commit stuff.  this message, or timeouts from the server.

  FL> annoying...

It's still there now, about an hour later.  I can't even tag the tree
with the r20b2 marker, of course.

How do we submit an SF admin request?

Jeremy



From effbot at telia.com  Mon Sep 25 22:31:06 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Mon, 25 Sep 2000 22:31:06 +0200
Subject: [Python-Dev] CVS problems
References: <LNBBLJKPBEHFEDALKOLCMEOMHHAA.tim_one@email.msn.com><00bc01c02723$6f8faf40$766940d5@hagrid> <14799.44881.753935.662313@bitdiddle.concentric.net>
Message-ID: <006901c0272f$ce106120$766940d5@hagrid>

jeremy wrote:

> It's still there now, about an hour later.  I can't even tag the tree
> with the r20b2 marker, of course.
> 
> How do we submit an SF admin request?

I've already submitted a support request.  not that anyone
seems to be reading them, though -- the oldest unassigned
request is from September 19th...

anyone know anyone at sourceforge?

</F>




From effbot at telia.com  Mon Sep 25 22:49:47 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Mon, 25 Sep 2000 22:49:47 +0200
Subject: [Python-Dev] last second patches (was: regarding the Python Developer posting...)
References: <LNBBLJKPBEHFEDALKOLCMEKAHHAA.tim_one@email.msn.com> <39CE71FD.8858B71D@lemburg.com> <016e01c02714$f945bc20$766940d5@hagrid> <017801c02715$ebcc38c0$766940d5@hagrid> <39CF8F6D.3F32C8FD@lemburg.com>              <003601c0271c$1b814c80$766940d5@hagrid>  <200009252021.PAA20146@cj20424-a.reston1.va.home.com>
Message-ID: <008101c02732$29fbf4c0$766940d5@hagrid>

> Fredrik, why don't you make your patch available for review by
> Marc-Andre -- after all he "owns" this code (is the original author).

hey, *I* wrote the original string type, didn't I? ;-)

anyway, the new unicodectype.c file is here:
http://sourceforge.net/patch/download.php?id=101652

(the patch is 500k, the new file 14k)

the new data file is here:
http://sourceforge.net/patch/download.php?id=101653

the new generator script is already in the repository
(Tools/unicode/makeunicodedata.py)

</F>




From fdrake at beopen.com  Mon Sep 25 22:39:35 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Mon, 25 Sep 2000 16:39:35 -0400 (EDT)
Subject: [Python-Dev] CVS problems
In-Reply-To: <006901c0272f$ce106120$766940d5@hagrid>
References: <LNBBLJKPBEHFEDALKOLCMEOMHHAA.tim_one@email.msn.com>
	<00bc01c02723$6f8faf40$766940d5@hagrid>
	<14799.44881.753935.662313@bitdiddle.concentric.net>
	<006901c0272f$ce106120$766940d5@hagrid>
Message-ID: <14799.47111.674769.204798@cj42289-a.reston1.va.home.com>

Fredrik Lundh writes:
 > anyone know anyone at sourceforge?

  I'll send an email.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From jim at interet.com  Mon Sep 25 22:48:28 2000
From: jim at interet.com (James C. Ahlstrom)
Date: Mon, 25 Sep 2000 16:48:28 -0400
Subject: [Python-Dev] Python syntax checker ?
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de>
			<39CF596C.17BA4DC5@interet.com> <14799.24252.537090.326130@anthem.concentric.net> <39CF69D4.E3649C69@interet.com> <200009251536.RAA26375@pandora.informatik.hu-berlin.de>
Message-ID: <39CFBA1C.3E05B760@interet.com>

Martin von Loewis wrote:
> 
>> Yes, but why not YACC?  Is Antlr so much better, or is

> I think the advantage that Barry saw is that ANTLR generates Java in
> addition to C, so it could be used in JPython as well. In addition,
> ANTLR is more advanced than YACC; it specifically supports full EBNF
> as input, and has better mechanisms for conflict resolution.

Oh, OK.  Thanks.
 
> Personally, I'm quite in favour of having the full parser source
> (including parser generator if necessary) in the Python source
> distribution. As a GCC contributor, I know what pain it is for users
> that GCC requires bison to build - even though it is only required for
> CVS builds, as distributions come with the generated files.

I see your point, but the practical solution that we can
do today is to use YACC, bison, and distribute the generated
parser files.

Jim



From jeremy at beopen.com  Mon Sep 25 23:14:02 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Mon, 25 Sep 2000 17:14:02 -0400 (EDT)
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: <39CFBA1C.3E05B760@interet.com>
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de>
	<39CF596C.17BA4DC5@interet.com>
	<14799.24252.537090.326130@anthem.concentric.net>
	<39CF69D4.E3649C69@interet.com>
	<200009251536.RAA26375@pandora.informatik.hu-berlin.de>
	<39CFBA1C.3E05B760@interet.com>
Message-ID: <14799.49178.2354.77727@bitdiddle.concentric.net>

>>>>> "JCA" == James C Ahlstrom <jim at interet.com> writes:

  >> Personally, I'm quite in favour of having the full parser source
  >> (including parser generator if necessary) in the Python source
  >> distribution. As a GCC contributor, I know what pain it is for
  >> users that GCC requires bison to build - even though it is only
  >> required for CVS builds, as distributions come with the generated
  >> files.

  JCA> I see your point, but the practical solution that we can do
  JCA> today is to use YACC, bison, and distribute the generated
  JCA> parser files.

I don't understand what problem this is a practical solution to.
This thread started with MAL's questions about finding errors in
Python code.  You mentioned an effort to write a lint-like tool.
It may be that YACC has great support for error recovery, in which
case MAL might want to look at it for his tool.

But in general, the most practical solution for parsing Python is
probably to use the Python parser and the builtin parser module.  It
already exists and seems to work just fine.
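(A minimal sketch of that approach, using the modern `ast` module as a stand-in since the old `parser` module is gone from current Pythons: one call parses a whole buffer, and it stops dead at the first error, which is exactly the limitation this thread is circling around.)

```python
import ast

good = "def f(x):\n    return x * 2\n"
tree = ast.parse(good)
print(type(tree).__name__)  # Module

bad = "def f(:\n    pass\n"
try:
    ast.parse(bad)
except SyntaxError as err:
    # One error, then it gives up -- no recovery, no second error.
    print("stopped at line", err.lineno)
```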

Jeremy



From thomas at xs4all.net  Mon Sep 25 23:27:01 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 25 Sep 2000 23:27:01 +0200
Subject: [Python-Dev] CVS problems
In-Reply-To: <006901c0272f$ce106120$766940d5@hagrid>; from effbot@telia.com on Mon, Sep 25, 2000 at 10:31:06PM +0200
References: <LNBBLJKPBEHFEDALKOLCMEOMHHAA.tim_one@email.msn.com><00bc01c02723$6f8faf40$766940d5@hagrid> <14799.44881.753935.662313@bitdiddle.concentric.net> <006901c0272f$ce106120$766940d5@hagrid>
Message-ID: <20000925232701.J20757@xs4all.nl>

On Mon, Sep 25, 2000 at 10:31:06PM +0200, Fredrik Lundh wrote:
> jeremy wrote:

> > It's still there now, about an hour later.  I can't even tag the tree
> > with the r20b2 marker, of course.
> > 
> > How do we submit an SF admin request?
> 
> I've already submitted a support request.  not that anyone
> seems to be reading them, though -- the oldest unassigned
> request is from September 19th...

> anyone knows anyone at sourceforge?

I've had good results mailing 'staff at sourceforge.net' -- but only in real
emergencies (one of the servers was down, at the time.) That isn't to say
you or someone else shouldn't use it now (it's delaying the beta, after all,
which is kind of an emergency) but I just can't say how fast they'll respond
to such a request :)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From tim_one at email.msn.com  Mon Sep 25 23:33:27 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Mon, 25 Sep 2000 17:33:27 -0400
Subject: [Python-Dev] CVS problems
In-Reply-To: <20000925232701.J20757@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEPPHHAA.tim_one@email.msn.com>

The CVS problem has been fixed.





From mal at lemburg.com  Tue Sep 26 00:35:34 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 26 Sep 2000 00:35:34 +0200
Subject: [Python-Dev] last second patches (was: regarding the Python  
 Developer posting...)
References: <LNBBLJKPBEHFEDALKOLCMEKAHHAA.tim_one@email.msn.com> <39CE71FD.8858B71D@lemburg.com> <016e01c02714$f945bc20$766940d5@hagrid> <017801c02715$ebcc38c0$766940d5@hagrid> <39CF8F6D.3F32C8FD@lemburg.com> <003601c0271c$1b814c80$766940d5@hagrid>
Message-ID: <39CFD336.C5B6DB4D@lemburg.com>

Fredrik Lundh wrote:
> 
> mal wrote:
> 
> > The C type checks are a little performance sensitive since they
> > are used on a char by char basis in the C implementation of
> > .upper(), etc. -- do the new methods give the same performance ?
> 
> well, they're about 40% faster on my box.  ymmv, of course.

Hmm, I get a 1% performance drop on Linux using pgcc, but
in the end it's a win anyway :-)

What remains are the nits I posted to SF.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From guido at beopen.com  Tue Sep 26 03:44:58 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 25 Sep 2000 20:44:58 -0500
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: Your message of "Mon, 25 Sep 2000 17:14:02 -0400."
             <14799.49178.2354.77727@bitdiddle.concentric.net> 
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de> <39CF596C.17BA4DC5@interet.com> <14799.24252.537090.326130@anthem.concentric.net> <39CF69D4.E3649C69@interet.com> <200009251536.RAA26375@pandora.informatik.hu-berlin.de> <39CFBA1C.3E05B760@interet.com>  
            <14799.49178.2354.77727@bitdiddle.concentric.net> 
Message-ID: <200009260144.UAA25752@cj20424-a.reston1.va.home.com>

> I don't understand what problem this is a practical solution to.
> This thread started with MAL's questions about finding errors in
> Python code.  You mentioned an effort to write a lint-like tool.
> It may be that YACC has great support for error recovery, in which
> case MAL might want to look at it for his tool.
> 
> But in general, the most practical solution for parsing Python is
> probably to use the Python parser and the builtin parser module.  It
> already exists and seems to work just fine.

Probably not that relevant any more, but MAL originally asked for a
parser that doesn't stop at the first error.  That's a real weakness
of the existing parser!!!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From greg at cosc.canterbury.ac.nz  Tue Sep 26 03:13:19 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Tue, 26 Sep 2000 13:13:19 +1200 (NZST)
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: <200009260144.UAA25752@cj20424-a.reston1.va.home.com>
Message-ID: <200009260113.NAA23556@s454.cosc.canterbury.ac.nz>

Guido:

> MAL originally asked for a
> parser that doesn't stop at the first error.  That's a real weakness
> of the existing parser!!!

Is it really worth putting a lot of effort into this?
In my experience, the vast majority of errors I get from
Python are run-time errors, not parse errors.

(If you could find multiple run-time errors in one go,
*that* would be an impressive trick!)

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From mwh21 at cam.ac.uk  Tue Sep 26 14:15:26 2000
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: Tue, 26 Sep 2000 13:15:26 +0100 (BST)
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: <200009260113.NAA23556@s454.cosc.canterbury.ac.nz>
Message-ID: <Pine.SOL.4.21.0009261309240.22922-100000@yellow.csi.cam.ac.uk>

On Tue, 26 Sep 2000, Greg Ewing wrote:

> Guido:
> 
> > MAL originally asked for a
> > parser that doesn't stop at the first error.  That's a real weakness
> > of the existing parser!!!
> 
> Is it really worth putting a lot of effort into this?

It might be if you were trying to develop an IDE that could syntactically
analyse what the user was typing even if he/she had left a half-finished
expression further up in the buffer (I'd kind of assumed this was the
goal).  So you're not continuing after errors, exactly, more like
unfinishednesses (or some better word...).

I guess one approach to this would be to divide up the buffer according
to indentation and then parse each block as delimited by the indentation
individually.

Two random points:

1) Triple-quoted strings are going to be a problem.
2) Has anyone gotten flex to tokenize Python?  I was looking at the manual
   yesterday and it didn't look impossible, although a bit tricky.
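(A rough sketch of the divide-by-indentation idea, with a hypothetical helper: it assumes top-level blocks start in column zero, uses `ast` in place of the era's `parser` module, and a triple-quoted string spanning blocks will indeed fool it, per point 1.)

```python
import ast

def split_top_level_blocks(source):
    """Split a buffer into chunks at each line that starts in column zero."""
    blocks, current = [], []
    for line in source.splitlines(keepends=True):
        starts_block = line.strip() and not line[0].isspace()
        if starts_block and current:
            blocks.append("".join(current))
            current = []
        current.append(line)
    if current:
        blocks.append("".join(current))
    return blocks

# One half-finished def no longer hides errors in the rest of the buffer:
src = "def f(:\n    oops\n\ndef g():\n    return 1\n"
for block in split_top_level_blocks(src):
    try:
        ast.parse(block)
        print("ok: ", block.splitlines()[0])
    except SyntaxError:
        print("bad:", block.splitlines()[0])
```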

Cheers,
M.




From jim at interet.com  Tue Sep 26 15:23:47 2000
From: jim at interet.com (James C. Ahlstrom)
Date: Tue, 26 Sep 2000 09:23:47 -0400
Subject: [Python-Dev] Python syntax checker ?
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de>
		<39CF596C.17BA4DC5@interet.com>
		<14799.24252.537090.326130@anthem.concentric.net>
		<39CF69D4.E3649C69@interet.com>
		<200009251536.RAA26375@pandora.informatik.hu-berlin.de>
		<39CFBA1C.3E05B760@interet.com> <14799.49178.2354.77727@bitdiddle.concentric.net>
Message-ID: <39D0A363.2DE02593@interet.com>

Jeremy Hylton wrote:

> I don't understand what problem this is a practical solution to.

To recover from errors better by using YACC's built-in error
recovery features.  Maybe unifying the C and Java parsers.  I
admit I don't know how J-Python parses Python.

I kind of threw in my objection to tokenize.py, which should be
combined with tokenizer.c.  Of course it is work that only
results in the same operation as before, but it reduces the code
base.  Not a popular project.

> But in general, the most practical solution for parsing Python is
> probably to use the Python parser and the builtin parser module.  It
> already exists and seems to work just fine.

A very good point.  I am not 100% sure it is worth it.  But I
found the current parser unworkable for my project.

JimA



From bwarsaw at beopen.com  Tue Sep 26 16:43:24 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Tue, 26 Sep 2000 10:43:24 -0400 (EDT)
Subject: [Python-Dev] Python syntax checker ?
References: <200009201707.TAA07172@pandora.informatik.hu-berlin.de>
	<39CF596C.17BA4DC5@interet.com>
	<14799.24252.537090.326130@anthem.concentric.net>
	<39CF69D4.E3649C69@interet.com>
	<200009251536.RAA26375@pandora.informatik.hu-berlin.de>
	<39CFBA1C.3E05B760@interet.com>
	<14799.49178.2354.77727@bitdiddle.concentric.net>
	<39D0A363.2DE02593@interet.com>
Message-ID: <14800.46604.587756.479012@anthem.concentric.net>

>>>>> "JCA" == James C Ahlstrom <jim at interet.com> writes:

    JCA> To recover from errors better by using YACC's built-in error
    JCA> recovery features.  Maybe unifying the C and Java parsers.  I
    JCA> admit I don't know how J-Python parses Python.

It uses JavaCC.

http://www.metamata.com/javacc/

-Barry



From thomas at xs4all.net  Tue Sep 26 20:20:53 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 26 Sep 2000 20:20:53 +0200
Subject: [Python-Dev] [OT] ApacheCon 2000
Message-ID: <20000926202053.K20757@xs4all.nl>

I'm (off-topicly) wondering if anyone here is going to the Apache Conference
in London, october 23-25, and how I'm going to recognize them (My PythonLabs
shirt will probably not last more than a day, and I don't have any other
python-related shirts ;) 

I'm also wondering if anyone knows a halfway-decent hotel somewhat near the
conference site (Olympia Conference Centre, Kensington). I have a
reservation at the Hilton, but it's bloody expensive and damned hard to deal
with, over the phone. I don't mind the price (boss pays) but I'd think
they'd not treat potential customers like village idiots ;P

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From jeremy at beopen.com  Tue Sep 26 21:01:27 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Tue, 26 Sep 2000 15:01:27 -0400 (EDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
Message-ID: <14800.62087.617722.272109@bitdiddle.concentric.net>

We have tar balls and RPMs available on our private FTP site,
python.beopen.com.  If you have a chance to test these on your
platform in the next couple of hours, feedback would be appreciated.
We've tested on FreeBSD and RH and Mandrake Linux.

What we're most interested in hearing about is whether it builds
cleanly and runs the regression test.

The actual release will occur later today from pythonlabs.com.

Jeremy



From fdrake at beopen.com  Tue Sep 26 21:43:42 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Tue, 26 Sep 2000 15:43:42 -0400 (EDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14800.62087.617722.272109@bitdiddle.concentric.net>
References: <14800.62087.617722.272109@bitdiddle.concentric.net>
Message-ID: <14800.64622.961057.204969@cj42289-a.reston1.va.home.com>

Jeremy Hylton writes:
 > We have tar balls and RPMs available on our private FTP site,
 > python.beopen.com.  If you have a chance to test these on your
 > platform in the next couple of hours, feedback would be appreciated.
 > We've tested on FreeBSD and RH and Mandrake Linux.

  I've just built & tested on Caldera 2.3 on the SourceForge compile
farm, and am getting some failures.  If anyone who knows Caldera can
figure these out, that would be great (I'll turn them into proper bug
reports later).
  The failing tests are for fcntl, openpty, and pty.  Here's the
output of regrtest -v for those tests:

bash$ ./python -tt ../Lib/test/regrtest.py -v test_{fcntl,openpty,pty}
test_fcntl
test_fcntl
Status from fnctl with O_NONBLOCK:  0
struct.pack:  '\001\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'test test_fcntl crashed -- exceptions.IOError: [Errno 37] No locks available
Traceback (most recent call last):
  File "../Lib/test/regrtest.py", line 235, in runtest
    __import__(test, globals(), locals(), [])
  File "../Lib/test/test_fcntl.py", line 31, in ?
    rv = fcntl.fcntl(f.fileno(), FCNTL.F_SETLKW, lockdata)
IOError: [Errno 37] No locks available
test_openpty
test_openpty
Calling os.openpty()
test test_openpty crashed -- exceptions.OSError: [Errno 2] No such file or directory
Traceback (most recent call last):
  File "../Lib/test/regrtest.py", line 235, in runtest
    __import__(test, globals(), locals(), [])
  File "../Lib/test/test_openpty.py", line 9, in ?
    master, slave = os.openpty()
OSError: [Errno 2] No such file or directory
test_pty
test_pty
Calling master_open()
Got master_fd '5', slave_name '/dev/ttyp0'
Calling slave_open('/dev/ttyp0')
test test_pty skipped --  Pseudo-terminals (seemingly) not functional.
2 tests failed: test_fcntl test_openpty
1 test skipped: test_pty
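(For the record: errno 37 is ENOLCK, which usually means the file lives on a filesystem without working POSIX locks, e.g. an NFS mount with no lock daemon; compile-farm home directories are a likely culprit. A minimal probe, sketched with `fcntl.lockf`, the portable wrapper over the same F_SETLKW request the failing test makes; the temp-file location is an assumption about where local storage lives.)

```python
import fcntl
import tempfile

# Without LOCK_NB, lockf() issues a blocking F_SETLKW -- the same request
# as the failing test.  On a filesystem without working POSIX locks this
# raises OSError with errno 37 (ENOLCK) instead of acquiring the lock.
with tempfile.NamedTemporaryFile() as f:
    fcntl.lockf(f, fcntl.LOCK_EX)   # blocking exclusive lock
    print("lock acquired")
    fcntl.lockf(f, fcntl.LOCK_UN)
```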


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From effbot at telia.com  Tue Sep 26 22:05:13 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 26 Sep 2000 22:05:13 +0200
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
References: <14800.62087.617722.272109@bitdiddle.concentric.net>
Message-ID: <004901c027f5$1d743640$766940d5@hagrid>

jeremy wrote:

> We have tar balls and RPMs available on our private FTP site,
> python.beopen.com.  If you have a chance to test these on your
> platform in the next couple of hours, feedback would be appreciated.
> We've tested on FreeBSD and RH and Mandrake Linux.

is the windows installer up to date?

I just grabbed it, only to get a "corrupt installation detected" message
box (okay, I confess: I do have a PythonWare distro installed, but may-
be you could use a slightly more polite message? ;-)

</F>




From tim_one at email.msn.com  Tue Sep 26 21:59:34 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 26 Sep 2000 15:59:34 -0400
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14800.62087.617722.272109@bitdiddle.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEDGHIAA.tim_one@email.msn.com>

[Jeremy Hylton]
> We have tar balls and RPMs available on our private FTP site,
> python.beopen.com.

I think he meant to add under /pub/tmp/.  In any case, that's where the
2.0b2 Windows installer is now:

    BeOpen-Python-2.0b2.exe
    5,667,334 bytes
    SHA digest:  4ec69734d9931f5b83b391b2a9606c2d4e793428
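(That 40-hex-digit digest is SHA-1; checking a downloaded file against it takes only a few lines of `hashlib`, sketched here with the installer filename as a hypothetical path.)

```python
import hashlib

def sha1_of_file(path, chunk_size=1 << 16):
    """Stream a file through SHA-1 so big installers need not fit in RAM."""
    digest = hashlib.sha1()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical usage against the digest quoted above:
# assert sha1_of_file("BeOpen-Python-2.0b2.exe") == \
#     "4ec69734d9931f5b83b391b2a9606c2d4e793428"
```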

> If you have a chance to test these on your platform in the next
> couple of hours, feedback would be appreciated.  We've tested on
> FreeBSD and RH and Mandrake Linux.

Would also be cool if at least one person other than me tried the Windows
installer.  I usually pick on Guido for this (just as he used to pick on
me), but, alas, he's somewhere in transit mid-continent.

executives!-ly y'rs  - tim





From jeremy at beopen.com  Tue Sep 26 22:05:44 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Tue, 26 Sep 2000 16:05:44 -0400 (EDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <004901c027f5$1d743640$766940d5@hagrid>
References: <14800.62087.617722.272109@bitdiddle.concentric.net>
	<004901c027f5$1d743640$766940d5@hagrid>
Message-ID: <14801.408.372215.493355@bitdiddle.concentric.net>

>>>>> "FL" == Fredrik Lundh <effbot at telia.com> writes:

  FL> jeremy wrote:
  >> We have tar balls and RPMs available on our private FTP site,
  >> python.beopen.com.  If you have a chance to test these on your
  >> platform in the next couple of hours, feedback would be
  >> appreciated.  We've tested on FreeBSD and RH and Mandrake Linux.

  FL> is the windows installer up to date?

No.  Tim has not done the Windows installer yet.  It's coming...

  FL> I just grabbed it, only to get a "corrupt installation detected"
  FL> message box (okay, I confess: I do have a PythonWare distro
  FL> installed, but may- be you could use a slightly more polite
  FL> message? ;-)

Did you grab the 2.0b1 exe?  I would not be surprised if the one in
/pub/tmp did not work.  It's probably an old pre-release version of
the beta 1 Windows installer.

Jeremy





From tim_one at email.msn.com  Tue Sep 26 22:01:23 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 26 Sep 2000 16:01:23 -0400
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <004901c027f5$1d743640$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEDHHIAA.tim_one@email.msn.com>

[/F]
> is the windows installer up to date?
>
> I just grabbed it, only to get a "corrupt installation detected" message
> box (okay, I confess: I do have a PythonWare distro installed, but may-
> be you could use a slightly more polite message? ;-)

I'm pretty sure you grabbed it while the scp from my machine was still in
progress.  Try it again!  While BeOpen.com has no official policy toward
PythonWare, I think it's cool.





From tim_one at email.msn.com  Tue Sep 26 22:02:48 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 26 Sep 2000 16:02:48 -0400
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14801.408.372215.493355@bitdiddle.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEDIHIAA.tim_one@email.msn.com>

All the Windows installers under /pub/tmp/ should work fine.  Although only
2.0b2 should be of any interest to anyone anymore.





From fdrake at beopen.com  Tue Sep 26 22:05:19 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Tue, 26 Sep 2000 16:05:19 -0400 (EDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14800.64622.961057.204969@cj42289-a.reston1.va.home.com>
References: <14800.62087.617722.272109@bitdiddle.concentric.net>
	<14800.64622.961057.204969@cj42289-a.reston1.va.home.com>
Message-ID: <14801.383.799094.8428@cj42289-a.reston1.va.home.com>

Fred L. Drake, Jr. writes:
 >   I've just built & tested on Caldera 2.3 on the SourceForge compile
 > farm, and am getting some failures.  If anyone who knows Caldera can
 > figure these out, that would be great (I'll turn them into proper bug
 > reports later).
 >   The failing tests are for fcntl, openpty, and pty.  Here's the
 > output of regrtest -v for those tests:

  These same tests fail in what appears to be the same way on SuSE 6.3
(using the SourceForge compile farm).  Does anyone know the vagaries
of Linux libc versions enough to tell if this is a libc5/glibc6
difference?  Or a difference in kernel versions?
  On to Slackware...


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From effbot at telia.com  Tue Sep 26 22:08:09 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 26 Sep 2000 22:08:09 +0200
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
References: <14800.62087.617722.272109@bitdiddle.concentric.net> <004901c027f5$1d743640$766940d5@hagrid>
Message-ID: <000001c027f7$e0915480$766940d5@hagrid>

I wrote:
> I just grabbed it, only to get a "corrupt installation detected" message
> box (okay, I confess: I do have a PythonWare distro installed, but
> maybe you could use a slightly more polite message? ;-)

nevermind; the size of the file keeps changing on the site, so
I guess someone's uploading it (over and over again?)

</F>




From nascheme at enme.ucalgary.ca  Tue Sep 26 22:16:10 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Tue, 26 Sep 2000 14:16:10 -0600
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14801.383.799094.8428@cj42289-a.reston1.va.home.com>; from Fred L. Drake, Jr. on Tue, Sep 26, 2000 at 04:05:19PM -0400
References: <14800.62087.617722.272109@bitdiddle.concentric.net> <14800.64622.961057.204969@cj42289-a.reston1.va.home.com> <14801.383.799094.8428@cj42289-a.reston1.va.home.com>
Message-ID: <20000926141610.A6557@keymaster.enme.ucalgary.ca>

On Tue, Sep 26, 2000 at 04:05:19PM -0400, Fred L. Drake, Jr. wrote:
>   These same tests fail in what appears to be the same way on SuSE 6.3
> (using the SourceForge compile farm).  Does anyone know the vagaries
> of Linux libc versions enough to tell if this is a libc5/glibc6
> difference?  Or a difference in kernel versions?

I don't know much but having the output from "uname -a" and "ldd python"
could be helpful (ie. which kernel and which libc).

  Neil



From tim_one at email.msn.com  Tue Sep 26 22:17:52 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 26 Sep 2000 16:17:52 -0400
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <000001c027f7$e0915480$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEDJHIAA.tim_one@email.msn.com>

> nevermind; the size of the file keeps changing on the site, so
> I guess someone's uploading it (over and over again?)

No, I uploaded it exactly once, but it took over an hour to complete
uploading.  That's done now.  If it *still* fails for you, then gripe.  You
simply jumped the gun by grabbing it before anyone said it was ready.





From fdrake at beopen.com  Tue Sep 26 22:32:21 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Tue, 26 Sep 2000 16:32:21 -0400 (EDT)
Subject: [Python-Dev] 2.0b2 on Slackware 7.0
Message-ID: <14801.2005.843456.598712@cj42289-a.reston1.va.home.com>

  I just built and tested 2.0b2 on Slackware 7.0, and found that
threads failed miserably.  I got the message:

pthread_cond_wait: Interrupted system call

over & over (*hundreds* of times before I killed it) during one of the
tests (test_fork1.py? it scrolled out of the scrollback buffer, 2000
lines).  If I configure it --without-threads it works great.  Unless
you need threads.

uname -a says:
Linux linux1.compile.sourceforge.net 2.2.14-5.0.14smp #1 SMP Sun Mar 26 13:03:52 PST 2000 i686 unknown

ldd ./python says:
	libdb.so.3 => /lib/libdb.so.3 (0x4001c000)
	libdl.so.2 => /lib/libdl.so.2 (0x40056000)
	libutil.so.1 => /lib/libutil.so.1 (0x4005a000)
	libm.so.6 => /lib/libm.so.6 (0x4005d000)
	libc.so.6 => /lib/libc.so.6 (0x4007a000)
	/lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)

  If anyone has any ideas, please send them along!  I'll turn this
into a real bug report later.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From effbot at telia.com  Tue Sep 26 22:48:49 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 26 Sep 2000 22:48:49 +0200
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
References: <14800.62087.617722.272109@bitdiddle.concentric.net> <004901c027f5$1d743640$766940d5@hagrid> <000001c027f7$e0915480$766940d5@hagrid>
Message-ID: <005901c027fb$2ecf8380$766940d5@hagrid>

> nevermind; the size of the file keeps changing on the site, so
> I guess someone's uploading it (over and over again?)

heh.  just discovered that my ISP has introduced a new
policy: if you send stupid messages, we'll knock you off
the net for 30 minutes...

anyway, I've now downloaded the installer, and it works
pretty well...

:::

just one weird thing:

according to dir, I have 41 megs on my C: disk before
running the installer...

according to the installer, I have 22.3 megs, but Python
only requires 18.3 megs, so it should be okay...

but a little later, the installer claims that it needs an
additional 21.8 megs free space...  if I click ignore, the
installer proceeds (but boy, is it slow or what? ;-)

after installation (but before reboot) (reboot!?), I have
19.5 megs free.

hmm...

after uninstalling, I have 40.7 megs free.  there's still
some crud in the Python20\Tools\idle directory.

after removing that stuff, I have 40.8 megs free.

close enough ;-)

on a second run, it claims that I have 21.3 megs free, and
that the installer needs another 22.8 megs to complete
installation.

:::

without rebooting, IDLE refuses to start, but the console
window works fine...

</F>




From martin at loewis.home.cs.tu-berlin.de  Tue Sep 26 22:34:41 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Tue, 26 Sep 2000 22:34:41 +0200
Subject: [Python-Dev] Bogus SAX test case
Message-ID: <200009262034.WAA09761@loewis.home.cs.tu-berlin.de>

test_sax.py has the test case test_xmlgen_ns, which reads

ns_uri = "http://www.python.org/xml-ns/saxtest/"

    gen.startDocument()
    gen.startPrefixMapping("ns1", ns_uri)
    gen.startElementNS((ns_uri, "doc"), "ns:doc", {})
    gen.endElementNS((ns_uri, "doc"), "ns:doc")
    gen.endPrefixMapping("ns1")
    gen.endDocument()

Translating that to XML, it should look like

<?xml version="1.0" encoding="iso-8859-1"?>
<ns:doc xmlns:ns1="http://www.python.org/xml-ns/saxtest/"></ns:doc>

(or, alternatively, the element could just be empty). Is that the XML
that would produce above sequence of SAX events?

It seems to me that this XML is ill-formed, the namespace prefix ns is
not defined here. Is that analysis correct? Furthermore, the test
checks whether the generator produces

<?xml version="1.0" encoding="iso-8859-1"?>
<ns1:doc xmlns:ns1="http://www.python.org/xml-ns/saxtest/"></ns1:doc>

It appears that the expected output is bogus; I'd rather expect to get
the original document back.

I noticed this because in PyXML, XMLGenerator *would* produce ns:doc
on output, so the test case broke. I have now changed PyXML to follow
Python 2.0b2 here.

My proposal would be to correct the test case to pass "ns1:doc" as the
qname, and to correct the generator to output the qname if that was
provided by the reader.
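[Archive note: the event sequence above can be replayed against the stdlib
generator. The sketch below assumes the present-day
xml.sax.saxutils.XMLGenerator API and passes "ns1:doc" as the qname, per the
proposed fix; the StringIO plumbing is my own illustration.]

```python
from io import StringIO
from xml.sax.saxutils import XMLGenerator

ns_uri = "http://www.python.org/xml-ns/saxtest/"
out = StringIO()
gen = XMLGenerator(out)  # default document encoding is iso-8859-1

gen.startDocument()
gen.startPrefixMapping("ns1", ns_uri)
# qname "ns1:doc" matches the prefix declared above, per the proposed fix
gen.startElementNS((ns_uri, "doc"), "ns1:doc", {})
gen.endElementNS((ns_uri, "doc"), "ns1:doc")
gen.endPrefixMapping("ns1")
gen.endDocument()

print(out.getvalue())
```

On a current Python this prints the well-formed document the test expects,
with the ns1 prefix on both the start and end tags.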

Comments?

Regards,
Martin



From effbot at telia.com  Tue Sep 26 22:57:11 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 26 Sep 2000 22:57:11 +0200
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
References: <14800.62087.617722.272109@bitdiddle.concentric.net> <004901c027f5$1d743640$766940d5@hagrid> <000001c027f7$e0915480$766940d5@hagrid> <005901c027fb$2ecf8380$766940d5@hagrid>
Message-ID: <000a01c027fc$6942c800$766940d5@hagrid>

I wrote:
> without rebooting, IDLE refuses to start, but the console
> window works fine...

fwiw, rebooting didn't help.

</F>




From thomas at xs4all.net  Tue Sep 26 22:51:47 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 26 Sep 2000 22:51:47 +0200
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14800.64622.961057.204969@cj42289-a.reston1.va.home.com>; from fdrake@beopen.com on Tue, Sep 26, 2000 at 03:43:42PM -0400
References: <14800.62087.617722.272109@bitdiddle.concentric.net> <14800.64622.961057.204969@cj42289-a.reston1.va.home.com>
Message-ID: <20000926225146.L20757@xs4all.nl>

On Tue, Sep 26, 2000 at 03:43:42PM -0400, Fred L. Drake, Jr. wrote:

>   The failing tests are for fcntl, openpty, and pty.  Here's the
> output of regrtest -v for those tests:

> bash$ ./python -tt ../Lib/test/regrtest.py -v test_{fcntl,openpty,pty}
> test_fcntl
> test_fcntl
> Status from fnctl with O_NONBLOCK:  0
> struct.pack:  '\001\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000'test test_fcntl crashed -- exceptions.IOError: [Errno 37] No locks available
> Traceback (most recent call last):
>   File "../Lib/test/regrtest.py", line 235, in runtest
>     __import__(test, globals(), locals(), [])
>   File "../Lib/test/test_fcntl.py", line 31, in ?
>     rv = fcntl.fcntl(f.fileno(), FCNTL.F_SETLKW, lockdata)
> IOError: [Errno 37] No locks available

Looks like your /tmp directory doesn't support locks. Perhaps it's some kind
of RAMdisk ? See if you can find a 'normal' filesystem (preferably not NFS)
where you have write-permission, and change the /tmp/delete-me path in
test_fcntl to that.
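[Archive note: a present-day sketch of this suggestion -- probe whether a
directory's filesystem supports fcntl locks before pointing the test there.
The function name is my own, and it uses fcntl.lockf rather than the test's
raw F_SETLKW call; both go through the same fcntl() locking machinery.]

```python
import errno
import fcntl
import tempfile

def supports_fcntl_locks(dirname):
    """Return True if fcntl byte-range locking works for files in dirname,
    False if the kernel reports ENOLCK ("No locks available"), as in the
    failing test_fcntl run quoted above."""
    with tempfile.NamedTemporaryFile(dir=dirname) as f:
        try:
            fcntl.lockf(f, fcntl.LOCK_EX)  # take an exclusive lock...
            fcntl.lockf(f, fcntl.LOCK_UN)  # ...and release it again
        except OSError as e:
            if e.errno == errno.ENOLCK:
                return False
            raise
    return True

print(supports_fcntl_locks(tempfile.gettempdir()))
```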

> test_openpty
> test_openpty
> Calling os.openpty()
> test test_openpty crashed -- exceptions.OSError: [Errno 2] No such file or directory
> Traceback (most recent call last):
>   File "../Lib/test/regrtest.py", line 235, in runtest
>     __import__(test, globals(), locals(), [])
>   File "../Lib/test/test_openpty.py", line 9, in ?
>     master, slave = os.openpty()
> OSError: [Errno 2] No such file or directory

If you're running glibc (which is pretty likely, because IIRC libc5 didn't
have an openpty() call, so test_openpty should be skipped) openpty() is
defined as a library routine that tries to open /dev/ptmx. That's the kernel
support for Unix98 pty's. However, it's possible that support is turned off
in the default Caldera kernel, or perhaps /dev/ptmx does not exist (what
kernel are you running, btw ?) /dev/ptmx was new in 2.1.x, so if you're
running 2.0 kernels, that might be the problem.

I'm not sure if you're supposed to get that error, though. I've never tested
glibc's openpty() support on a system that had it turned off, though I have
seen *almost* exactly the same error message from BSDI's openpty() call,
which works by sequentially trying to open each pty, until it finds one that
works. 
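[Archive note: the probe described above can be written in a few lines of
present-day Python; the function name and the error-message string are my
own illustration.]

```python
import os

def probe_openpty():
    """Try os.openpty(); on glibc/Linux this goes through /dev/ptmx (the
    Unix98 pty multiplexer).  Returns the slave device name on success, or
    a message carrying the errno when pty support is missing, as in the
    failing test_openpty run quoted above."""
    try:
        master, slave = os.openpty()
    except OSError as e:
        return "openpty failed: %s" % e
    try:
        return os.ttyname(slave)
    finally:
        os.close(slave)
        os.close(master)

print(probe_openpty())
```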

> test_pty
> test_pty
> Calling master_open()
> Got master_fd '5', slave_name '/dev/ttyp0'
> Calling slave_open('/dev/ttyp0')
> test test_pty skipped --  Pseudo-terminals (seemingly) not functional.
> 2 tests failed: test_fcntl test_openpty
> 1 test skipped: test_pty

The 'normal' procedure for opening pty's is to open the master, and if that
works, the pty is functional... But it looks like you could open the master,
but not the slave. Possibly permission problems, or a messed up /dev
directory. Do you know if /dev/ttyp0 was in use while you were running the
test ? (it's pretty likely it was, since it's usually the first pty on the
search list.) What might be happening here is that the master is openable,
for some reason, even if the pty/tty pair is already in use, but the slave
isn't openable. That would mean that the pty library is basically
nonfunctional, on those platforms, and it's definitely not the behaviour
I've seen on other platforms :P And this wouldn't be a new thing, because
the pty module has always worked this way.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From tim_one at email.msn.com  Tue Sep 26 22:56:33 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 26 Sep 2000 16:56:33 -0400
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <005901c027fb$2ecf8380$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEDLHIAA.tim_one@email.msn.com>

[Fredrik Lundh]
> ...
> just one weird thing:
>
> according to dir, I have 41 megs on my C: disk before
> running the installer...
>
> according to the installer, I have 22.3 megs,

This is the Wise "Check free disk space" "Script item".  Now you know as
much about it as I do <wink>.

> but Python only requires 18.3 megs, so it should be okay...

Noting that 22.3 + 18.3 ~= 41.  So it sounds like Wise's "Disk space
remaining" is trying to tell you how much space you'll have left *after* the
install.  Indeed, if you try unchecking various items in the "Select
Components" dialog, you should see that the "Disk space remaining" changes
accordingly.

> but a little later, the installer claims that it needs an
> additional 21.8 megs free space...  if I click ignore, the
> installer proceeds (but boy, is it slow or what? ;-)

Win95?  Which version?  The installer runs very quickly for me (Win98).
I've never tried it without plenty of free disk space, though; maybe it
needs temp space for unpacking?  Dunno.

> after installation (but before reboot) (reboot!?), I have
> 19.5 megs free.

It's unclear here whether the installer did or did not *say* it wanted you
to reboot.  It should ask for a reboot if and only if it needs to update an
MS shared DLL (the installer ships with MSVCRT.DLL and MSVCIRT.DLL).

> hmm...
>
> after uninstalling, I have 40.7 megs free.  there's still
> some crud in the Python20\Tools\idle directory.

Like what?  .pyc files, perhaps?  Like most uninstallers, it will not delete
files it didn't install, so all .pyc files (or anything else) generated
after the install won't be touched.

> after removing that stuff, I have 40.8 megs free.
>
> close enough ;-)
>
> on a second run, it claims that I have 21.3 megs free, and
> that the installer needs another 22.8 megs to complete
> installation.

Noted.

> without rebooting, IDLE refuses to start, but the console
> window works fine...

If it told you to reboot and you didn't, I don't really care what happens if
you ignore the instructions <wink>.  Does IDLE start after you reboot?

thanks-for-the-pain!-ly y'rs  - tim





From tim_one at email.msn.com  Tue Sep 26 23:02:14 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 26 Sep 2000 17:02:14 -0400
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <000a01c027fc$6942c800$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEDMHIAA.tim_one@email.msn.com>

[/F]
> I wrote:
> > without rebooting, IDLE refuses to start, but the console
> > window works fine...
>
> fwiw, rebooting didn't help.

So let's start playing bug report:  Which version of Windows?  By what means
did you attempt to start IDLE?  What does "refuses to start" mean (error
msg, system freeze, hourglass that never goes away, pops up & vanishes,
nothing visible happens at all, ...)?  Does Tkinter._test() work from a
DOS-box Python?  Do you have magical Tcl/Tk envars set for your own
development work?  Stuff like that.





From effbot at telia.com  Tue Sep 26 23:30:09 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 26 Sep 2000 23:30:09 +0200
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
References: <LNBBLJKPBEHFEDALKOLCOEDMHIAA.tim_one@email.msn.com>
Message-ID: <001b01c02800$f3996000$766940d5@hagrid>

tim wrote,
> > fwiw, rebooting didn't help.

> So let's start playing bug report:

oh, I've figured it out (what did you expect ;-). read on.

> Which version of Windows?

Windows 95 OSR 2.

> By what means did you attempt to start IDLE?

> What does "refuses to start" mean (error msg, system freeze,
> hourglass that never goes away, pops up & vanishes, nothing
> visible happens at all, ...)?

idle never appears.

> Does Tkinter._test() work from a DOS-box Python?

yes -- but it hangs if I close it with the "x" button (same
problem as I've reported earlier).

> Do you have magical Tcl/Tk envars set for your own
> development work?

bingo!

(a global PYTHONPATH setting also resulted in some interesting
behaviour... on my wishlist for 2.1: an option telling Python to
ignore all PYTHON* environment variables...)

</F>




From tim_one at email.msn.com  Tue Sep 26 23:50:54 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 26 Sep 2000 17:50:54 -0400
Subject: [Python-Dev] Crisis aversive
Message-ID: <LNBBLJKPBEHFEDALKOLCGEEAHIAA.tim_one@email.msn.com>

I'm going to take a nap now.  If there's a Windows crisis for the duration,
mail pleas for urgent assistance to bwarsaw at beopen.com -- especially if it
involves interactions between a Python script running as an NT service and
python-mode.el under NT Emacs.  Barry *loves* those!

Back online in a few hours.

sometimes-when-you-hit-the-wall-you-stick-ly y'rs  - tim





From fdrake at beopen.com  Tue Sep 26 23:50:16 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Tue, 26 Sep 2000 17:50:16 -0400 (EDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <20000926141610.A6557@keymaster.enme.ucalgary.ca>
References: <14800.62087.617722.272109@bitdiddle.concentric.net>
	<14800.64622.961057.204969@cj42289-a.reston1.va.home.com>
	<14801.383.799094.8428@cj42289-a.reston1.va.home.com>
	<20000926141610.A6557@keymaster.enme.ucalgary.ca>
Message-ID: <14801.6680.507173.995404@cj42289-a.reston1.va.home.com>

Neil Schemenauer writes:
 > I don't know much but having the output from "uname -a" and "ldd python"
 > could be helpful (ie. which kernel and which libc).

Under SuSE 6.3, uname -a says:
Linux linux1.compile.sourceforge.net 2.2.14-5.0.14smp #1 SMP Sun Mar 26 13:03:52 PST 2000 i686 unknown

ldd ./python says:
	libdb.so.3 => /lib/libdb.so.3 (0x4001d000)
	libpthread.so.0 => /lib/libpthread.so.0 (0x4005c000)
	libdl.so.2 => /lib/libdl.so.2 (0x4006e000)
	libutil.so.1 => /lib/libutil.so.1 (0x40071000)
	libm.so.6 => /lib/libm.so.6 (0x40075000)
	libc.so.6 => /lib/libc.so.6 (0x40092000)
	/lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)

Under Caldera 2.3, uname -a says:
Linux linux1.compile.sourceforge.net 2.2.14-5.0.14smp #1 SMP Sun Mar 26 13:03:52 PST 2000 i686 unknown

ldd ./python says:
	libdb.so.3 => /lib/libdb.so.3 (0x4001a000)
	libpthread.so.0 => /lib/libpthread.so.0 (0x40055000)
	libdl.so.2 => /lib/libdl.so.2 (0x40066000)
	libutil.so.1 => /lib/libutil.so.1 (0x4006a000)
	libm.so.6 => /lib/libm.so.6 (0x4006d000)
	libc.so.6 => /lib/libc.so.6 (0x4008a000)
	/lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)

  Now, it may be that something strange is going on since these are
the "virtual environments" on SourceForge.  I'm not sure these are
really the same thing as running those systems.  I'm looking at the
script to start SuSE; there's nothing really there but a chroot call;
perhaps there's a kernel/library mismatch?
  I'll have to ask about how these are supposed to work a little
more; kernel/libc mismatches could be a real problem in this
environment.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From fdrake at beopen.com  Tue Sep 26 23:52:59 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Tue, 26 Sep 2000 17:52:59 -0400 (EDT)
Subject: [Python-Dev] Crisis aversive
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEEAHIAA.tim_one@email.msn.com>
References: <LNBBLJKPBEHFEDALKOLCGEEAHIAA.tim_one@email.msn.com>
Message-ID: <14801.6843.516029.921562@cj42289-a.reston1.va.home.com>

Tim Peters writes:
 > sometimes-when-you-hit-the-wall-you-stick-ly y'rs  - tim

  I told you to take off that Velcro body armor!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From bwarsaw at beopen.com  Tue Sep 26 23:57:22 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Tue, 26 Sep 2000 17:57:22 -0400 (EDT)
Subject: [Python-Dev] Crisis aversive
References: <LNBBLJKPBEHFEDALKOLCGEEAHIAA.tim_one@email.msn.com>
Message-ID: <14801.7106.388711.967339@anthem.concentric.net>

>>>>> "TP" == Tim Peters <tim_one at email.msn.com> writes:

    TP> I'm going to take a nap now.  If there's a Windows crisis for
    TP> the duration, mail pleas for urgent assistance to
    TP> bwarsaw at beopen.com -- especially if it involves interactions
    TP> between a Python script running as an NT service and
    TP> python-mode.el under NT Emacs.  Barry *loves* those!

Indeed!  I especially love these because I don't have a working
Windows system at the moment, so every such bug just gets classified
as non-reproducible.

or-"works-for-me"-about-as-well-as-if-i-did-have-windows-ly y'rs,
-Barry



From tommy at ilm.com  Wed Sep 27 00:55:02 2000
From: tommy at ilm.com (Victor the Cleaner)
Date: Tue, 26 Sep 2000 15:55:02 -0700 (PDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14800.62087.617722.272109@bitdiddle.concentric.net>
References: <14800.62087.617722.272109@bitdiddle.concentric.net>
Message-ID: <14801.10496.986326.537462@mace.lucasdigital.com>

Hi All,

Jeremy asked me to send this report (which I originally sent just to
him) along to the rest of python-dev, so here ya go:

------------%< snip %<----------------------%< snip %<------------

Hey Jeremy,

Configured (--without-gcc), made and ran just fine on my IRIX6.5 O2.
The "make test" output indicated a lot of skipped modules since I
didn't do any Setup.in modifications before making everything, and the 
only error came from test_unicodedata:

test test_unicodedata failed -- Writing: 'e052289ecef97fc89c794cf663cb74a64631d34e', expected: 'b88684df19fca8c3d0ab31f040dd8de89f7836fe'

Nothing else that ran had any errors.  Here's the final output:

77 tests OK.
1 test failed: test_unicodedata
24 tests skipped: test_al test_audioop test_cd test_cl test_crypt test_dbm test_dl test_gdbm test_gl test_gzip test_imageop test_imgfile test_linuxaudiodev test_minidom test_nis test_pty test_pyexpat test_rgbimg test_sax test_sunaudiodev test_timing test_winreg test_winsound test_zlib

is there anything I can do to help debug the unicodedata failure?

------------%< snip %<----------------------%< snip %<------------

Jeremy Hylton writes:
| We have tar balls and RPMs available on our private FTP site,
| python.beopen.com.  If you have a chance to test these on your
| platform in the next couple of hours, feedback would be appreciated.
| We've tested on FreeBSD and RH and Mandrake Linux.
| 
| What we're most interested in hearing about is whether it builds
| cleanly and runs the regression test.
| 
| The actual release will occur later today from pythonlabs.com.
| 
| Jeremy
| 
| _______________________________________________
| Python-Dev mailing list
| Python-Dev at python.org
| http://www.python.org/mailman/listinfo/python-dev



From jeremy at beopen.com  Wed Sep 27 01:07:03 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Tue, 26 Sep 2000 19:07:03 -0400 (EDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14801.10496.986326.537462@mace.lucasdigital.com>
References: <14800.62087.617722.272109@bitdiddle.concentric.net>
	<14801.10496.986326.537462@mace.lucasdigital.com>
Message-ID: <14801.11287.963056.896941@bitdiddle.concentric.net>

I was just talking with Guido who wondered if it might simply be an
optimizer bug with the IRIX compiler.  Does the same problem occur with
optimization turned off?

Jeremy



From tommy at ilm.com  Wed Sep 27 02:01:54 2000
From: tommy at ilm.com (Victor the Cleaner)
Date: Tue, 26 Sep 2000 17:01:54 -0700 (PDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14801.11287.963056.896941@bitdiddle.concentric.net>
References: <14800.62087.617722.272109@bitdiddle.concentric.net>
	<14801.10496.986326.537462@mace.lucasdigital.com>
	<14801.11287.963056.896941@bitdiddle.concentric.net>
Message-ID: <14801.14476.284150.194816@mace.lucasdigital.com>

yes, it does.  I changed this line in the toplevel Makefile:

OPT =	-O -OPT:Olimit=0

to

OPT =

and saw no optimization going on during compiling (yes, I made clean
first) but I got the exact same result from test_unicodedata.


Jeremy Hylton writes:
| I was just talking with Guido who wondered if it might simply be an
| optimizer bug with the IRIX compiler.  Does the same problem occur with
| optimization turned off?
| 
| Jeremy



From gward at python.net  Wed Sep 27 02:11:07 2000
From: gward at python.net (Greg Ward)
Date: Tue, 26 Sep 2000 20:11:07 -0400
Subject: [Python-Dev] Stupid distutils bug
Message-ID: <20000926201107.A1179@beelzebub>

No, I mean *really* stupid.  So stupid that I nearly fell out of my
chair with embarrassment when I saw Thomas Heller's report of it, because
I released Distutils 0.9.3 *before* reading my mail.  D'oh!

Anyways, this is such a colossally stupid bug that I'm *glad* 2.0b2
hasn't gone out yet: it gives me a chance to checkin the (3-line) fix.
Here's what I plan to do:
  * tag distutils-0_9_3 (ie. last bit of bureaucracy for the
    broken, about-to-be-superseded release)
  * checkin my fix
  * release Distutils 0.9.4 (with this 3-line fix and *nothing* more)
  * tag distutils-0_9_4
  * calmly sit back and wait for Jeremy and Tim to flay me alive

Egg-on-face, paper-bag-on-head, etc. etc...

        Greg

PS. be sure to cc me: I'm doing this from home, but my python-dev
subscription goes to work.

-- 
Greg Ward                                      gward at python.net
http://starship.python.net/~gward/



From jeremy at beopen.com  Wed Sep 27 02:25:53 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Tue, 26 Sep 2000 20:25:53 -0400 (EDT)
Subject: [Python-Dev] Stupid distutils bug
In-Reply-To: <20000926201107.A1179@beelzebub>
References: <20000926201107.A1179@beelzebub>
Message-ID: <14801.16017.841176.232036@bitdiddle.concentric.net>

Greg,

The distribution tarball was cut this afternoon around 2pm.  It's way
too late to change anything in it.  Sorry.

Jeremy



From gward at python.net  Wed Sep 27 02:22:32 2000
From: gward at python.net (Greg Ward)
Date: Tue, 26 Sep 2000 20:22:32 -0400
Subject: [Python-Dev] Stupid distutils bug
In-Reply-To: <14801.16017.841176.232036@bitdiddle.concentric.net>; from jeremy@beopen.com on Tue, Sep 26, 2000 at 08:25:53PM -0400
References: <20000926201107.A1179@beelzebub> <14801.16017.841176.232036@bitdiddle.concentric.net>
Message-ID: <20000926202232.D975@beelzebub>

On 26 September 2000, Jeremy Hylton said:
> The distribution tarball was cut this afternoon around 2pm.  It's way
> too late to change anything in it.  Sorry.

!@$!#!  I didn't see anything on python.org or pythonlabs.com, so I
assumed it wasn't done yet.  Oh well, Distutils 0.9.4 will go out
shortly anyways.  I'll just go off in a corner and castigate myself
mercilessly.  Arghgh!

        Greg
-- 
Greg Ward                                      gward at python.net
http://starship.python.net/~gward/



From jeremy at beopen.com  Wed Sep 27 02:33:22 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Tue, 26 Sep 2000 20:33:22 -0400 (EDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14801.14476.284150.194816@mace.lucasdigital.com>
References: <14800.62087.617722.272109@bitdiddle.concentric.net>
	<14801.10496.986326.537462@mace.lucasdigital.com>
	<14801.11287.963056.896941@bitdiddle.concentric.net>
	<14801.14476.284150.194816@mace.lucasdigital.com>
Message-ID: <14801.16466.928385.529906@bitdiddle.concentric.net>

Sounded too easy, didn't it?  We'll just have to wait for MAL or /F to
followup.

Jeremy



From tim_one at email.msn.com  Wed Sep 27 02:34:51 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 26 Sep 2000 20:34:51 -0400
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14801.10496.986326.537462@mace.lucasdigital.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEEGHIAA.tim_one@email.msn.com>

[Victor the Cleaner]
> Jeremy asked me to send this report (which I originally sent just to
> him) along to the rest of python-dev, so here ya go:

Bug reports should go to SourceForge, else as often as not they'll get
lost.

> ------------%< snip %<----------------------%< snip %<------------
>
> Hey Jeremy,
>
> Configured (--without-gcc), made and ran just fine on my IRIX6.5 O2.
> The "make test" output indicated a lot of skipped modules since I
> didn't do any Setup.in modifications before making everything, and the
> only error came from test_unicodedata:
>
> test test_unicodedata failed -- Writing:
> 'e052289ecef97fc89c794cf663cb74a64631d34e', expected:
> 'b88684df19fca8c3d0ab31f040dd8de89f7836fe'

The problem appears to be that the test uses the secret "unicode-internal"
encoding, which is dependent upon the big/little-endianess of your platform.
I can reproduce your flawed hash exactly on my platform by replacing this
line:

        h.update(u''.join(data).encode('unicode-internal'))

in test_unicodedata.py's test_methods() with this block:

        import array
        xxx = array.array("H", map(ord, u''.join(data)))
        xxx.byteswap()
        h.update(xxx)

When you do this from a shell:

>>> u"A".encode("unicode-internal")
'A\000'
>>>

I bet you get

'\000A'

Right?
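[Archive note: the byte-order dependence described above is easy to see
with the explicit utf-16 codecs, which pin down the endianness instead of
inheriting the platform's the way "unicode-internal" did:]

```python
import sys

# u"A" is code point 0x41; how a 16-bit code unit is laid out in bytes
# depends on the chosen (or, for "unicode-internal", the platform's)
# byte order.
le = u"A".encode("utf-16-le")  # little-endian: low byte first
be = u"A".encode("utf-16-be")  # big-endian: high byte first

print(repr(le))  # the 'A\000' case
print(repr(be))  # the '\000A' case

# Which of the two a given machine's internal encoding matched:
print(sys.byteorder)
```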





From tim_one at email.msn.com  Wed Sep 27 02:39:49 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 26 Sep 2000 20:39:49 -0400
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14801.16466.928385.529906@bitdiddle.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEEHHIAA.tim_one@email.msn.com>

> Sounded too easy, didn't it?

Not at all:  an optimization bug on SGI is the *usual* outcome <0.5 wink>!

> We'll just have to wait for MAL or /F to followup.

See my earlier mail; the cause is thoroughly understood; it actually means
Unicode is working fine on his machine; but I don't know enough about
Unicode encodings to know how to rewrite the test in a portable way.





From akuchlin at cnri.reston.va.us  Wed Sep 27 02:43:24 2000
From: akuchlin at cnri.reston.va.us (Andrew Kuchling)
Date: Tue, 26 Sep 2000 20:43:24 -0400
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <001b01c02800$f3996000$766940d5@hagrid>; from effbot@telia.com on Tue, Sep 26, 2000 at 11:30:09PM +0200
References: <LNBBLJKPBEHFEDALKOLCOEDMHIAA.tim_one@email.msn.com> <001b01c02800$f3996000$766940d5@hagrid>
Message-ID: <20000926204324.A20476@newcnri.cnri.reston.va.us>

On Tue, Sep 26, 2000 at 11:30:09PM +0200, Fredrik Lundh wrote:
>on my wishlist for 2.1: an option telling Python to
>ignore all PYTHON* environment variables...)

You could just add an environment variable that did this... dohhh!

--am"Raymond Smullyan"k




From greg at cosc.canterbury.ac.nz  Wed Sep 27 02:51:05 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed, 27 Sep 2000 12:51:05 +1200 (NZST)
Subject: [Python-Dev] Python syntax checker ?
In-Reply-To: <Pine.SOL.4.21.0009261309240.22922-100000@yellow.csi.cam.ac.uk>
Message-ID: <200009270051.MAA23788@s454.cosc.canterbury.ac.nz>

By the way, one of the examples that comes with my
Plex module is an almost-complete Python scanner.
Just thought I'd mention it in case it would help
anyone.

http://www.cosc.canterbury.ac.nz/~greg/python/Plex

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From gward at python.net  Wed Sep 27 02:53:12 2000
From: gward at python.net (Greg Ward)
Date: Tue, 26 Sep 2000 20:53:12 -0400
Subject: [Python-Dev] Distutils 1.0 code freeze: Oct 1
Message-ID: <20000926205312.A1470@beelzebub>

Considering the following schedule of events:

  Oct  4: I go out of town (away from email, off the net, etc.)
  Oct 10: planned release of Python 2.0
  Oct 12: I'm back in town, ready to hack! (and wondering why it's
          so quiet around here...)

the Distutils 1.0 release will go out October 1 or 2.  I don't need
quite as much code freeze time as the full Python release, but let's put 
it this way: if there are features you want added to the Distutils that
I don't already know about, forget about it.  Changes currently under
consideration:

  * Rene Liebscher's rearrangement of the CCompiler classes; most
    of this is just reducing the amount of code, but it does
    add some minor features, so it's under consideration.

  * making byte-compilation more flexible: should be able to
    generate both .pyc and .pyo files, and should be able to
    do it at build time or install time (developer's and packager's
    discretion)
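[Archive note: a present-day stdlib sketch of that byte-compilation item --
the optimize argument of py_compile.compile plays the role of the old
.pyc/.pyo split.  The temp-file names here are purely illustrative.]

```python
import os
import py_compile
import tempfile

# Write a throwaway module, then byte-compile it at two optimization
# levels: 0 keeps asserts and docstrings, 1 strips asserts (the old .pyo).
src = os.path.join(tempfile.mkdtemp(), "demo.py")
with open(src, "w") as f:
    f.write("assert True\nX = 42\n")

plain = py_compile.compile(src, cfile=src + "c", optimize=0)
opt = py_compile.compile(src, cfile=src + "o", optimize=1)
print(os.path.exists(plain), os.path.exists(opt))
```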

If you know about any outstanding Distutils bugs, please tell me *now*.
Put 'em in the SourceForge bug database if you're wondering why I
haven't fixed them yet -- they might have gotten lost, I might not know
about 'em, etc.  If you're not sure, put it in SourceForge.

Stuff that will definitely have to wait until after 1.0:

  * a "test" command (standard test framework for Python modules)

  * finishing the "config" command (auto-configuration)

  * installing package meta-data, to support "what *do* I have
    installed, anyways?" queries, uninstallation, upgrades, etc.

Blue-sky projects:

  * standard documentation processing

  * intra-module dependencies

        Greg
-- 
Greg Ward                                      gward at python.net
http://starship.python.net/~gward/



From dkwolfe at pacbell.net  Wed Sep 27 07:15:52 2000
From: dkwolfe at pacbell.net (Dan Wolfe)
Date: Tue, 26 Sep 2000 22:15:52 -0700
Subject: [Python-Dev] FW: regarding the Python Developer posting...
Message-ID: <0G1J00FEC58TA3@mta6.snfc21.pbi.net>

Hi Marc-Andre,

Regarding:

>You could try to enable the macro at the top of unicodectype.c:
> 
>#if defined(macintosh) || defined(MS_WIN64)
>/*XXX This was required to avoid a compiler error for an early Win64
> * cross-compiler that was used for the port to Win64. When the platform is
> * released the MS_WIN64 inclusion here should no longer be necessary.
> */
>/* This probably needs to be defined for some other compilers too. It 
>breaks the
>** 5000-label switch statement up into switches with around 1000 cases each.
>*/
>#define BREAK_SWITCH_UP return 1; } switch (ch) {
>#else
>#define BREAK_SWITCH_UP /* nothing */
>#endif

I've tested it with BREAK_SWITCH_UP enabled and it fixes the 
problem - same as using -traditional-cpp.  However, before we commit 
this change I need to see if they are planning on fixing it... remember, 
this Mac OS X is beta software.... :-)

>If it does compile with the work-around enabled, please
>give us a set of defines which identify the compiler and
>platform so we can enable it per default for your setup.

Automake is driving me nuts... it's a long way from a GUI for this poor 
old mac guy.  I'll see what I can do... stay tuned. ;-)

- Dan



From tim_one at email.msn.com  Wed Sep 27 07:39:35 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 27 Sep 2000 01:39:35 -0400
Subject: [Python-Dev] FW: regarding the Python Developer posting...
In-Reply-To: <0G1J00FEC58TA3@mta6.snfc21.pbi.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEFFHIAA.tim_one@email.msn.com>

[about the big switch in unicodectype.c]

Dan, I'll suggest again that you try working from the current CVS tree
instead.  The giant switch stmt doesn't even exist anymore!  Few developers
are going to volunteer their time to help with code that's already been
replaced.  Talk to Steven Majewski, too -- he's also keen to see this work
on Macs, and knows a lot about Python internals.





From dkwolfe at pacbell.net  Wed Sep 27 09:02:00 2000
From: dkwolfe at pacbell.net (Dan Wolfe)
Date: Wed, 27 Sep 2000 00:02:00 -0700
Subject: [Python-Dev] FW: regarding the Python Developer posting...
Message-ID: <0G1J0028SA6KS4@mta5.snfc21.pbi.net>

>>[about the big switch in unicodectype.c]
>
>[Tim: use the current CVS tree instead... code's been replaced...]

duh! gotta read them archives before following up on a request... 
can't trust the hyper-active Python development team with a code 
freeze.... <wink>

I'm happy to report that it now compiles correctly without a 
-traditional-cpp flag.

Unfortunately, test_re.py now seg faults.... which is caused by 
test_sre.py... in particular the following:

src/Lib/test/test_sre.py

if verbose:
    print 'Test engine limitations'

# Try nasty case that overflows the straightforward recursive
# implementation of repeated groups.
#test(r"""sre.match(r'(x)*', 50000*'x').span()""",
#   (0, 50000), RuntimeError)
#test(r"""sre.match(r'(x)*y', 50000*'x'+'y').span()""",
#     (0, 50001), RuntimeError)
#test(r"""sre.match(r'(x)*?y', 50000*'x'+'y').span()""",
#     (0, 50001), RuntimeError)


test_unicodedata fails... same endian problem as SGI...
test_format fails... looks like a problem with the underlying C code.

Here's the config instructions for Mac OS X Public Beta:

Building Python 2.0b1 + CVS
9/26/2000
Dan Wolfe

./configure -with-threads -with-dyld -with-suffix=.exe

change in src/config.h:

/* Define if you have POSIX threads */
#define _POSIX_THREADS 1

to 

/* #define _POSIX_THREADS 1 */

change in src/Makefile

# Compiler options passed to subordinate makes
OPT=		-g -O2 -OPT:Olimit=0

to

OPT=		-g -O2

comment out the following in src/Lib/test/test_sre.py

if verbose:
    print 'Test engine limitations'

# Try nasty case that overflows the straightforward recursive
# implementation of repeated groups.
#test(r"""sre.match(r'(x)*', 50000*'x').span()""",
#   (0, 50000), RuntimeError)
#test(r"""sre.match(r'(x)*y', 50000*'x'+'y').span()""",
#     (0, 50001), RuntimeError)
#test(r"""sre.match(r'(x)*?y', 50000*'x'+'y').span()""",
#     (0, 50001), RuntimeError)


After install, manually go into /usr/local/bin and strip the .exe off the 
installed files.


- Dan






From trentm at ActiveState.com  Wed Sep 27 09:32:33 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Wed, 27 Sep 2000 00:32:33 -0700
Subject: [Python-Dev] WHOA!!! Screw up on my part: how do I undo this (Re: [Python-checkins] CVS: black - Imported sources)
In-Reply-To: <200009270706.AAA21107@slayer.i.sourceforge.net>; from tmick@users.sourceforge.net on Wed, Sep 27, 2000 at 12:06:06AM -0700
References: <200009270706.AAA21107@slayer.i.sourceforge.net>
Message-ID: <20000927003233.C19872@ActiveState.com>

I was playing with a different SourceForge project and I screwed up my
CVSROOT (used Python's instead). Sorry, sorry!

How do I undo this cleanly? I could 'cvs remove' the README.txt file but that
would still leave the top-level 'black/' turd right? Do the SourceForge admin
guys have to manually kill the 'black' directory in the repository?


or-failing-that-can-my-pet-project-make-it-into-python-2.0-<weak-smile>-ly
yours,
Trent



On Wed, Sep 27, 2000 at 12:06:06AM -0700, Trent Mick wrote:
> Update of /cvsroot/python/black
> In directory slayer.i.sourceforge.net:/tmp/cvs-serv20977
> 
> Log Message:
> first import into CVS
> 
> Status:
> 
> Vendor Tag:	vendor
> Release Tags:	start
> 		
> N black/README.txt
> 
> No conflicts created by this import
> 
> 
> ***** Bogus filespec: -
> ***** Bogus filespec: Imported
> ***** Bogus filespec: sources
> 
> _______________________________________________
> Python-checkins mailing list
> Python-checkins at python.org
> http://www.python.org/mailman/listinfo/python-checkins

-- 
Trent Mick
TrentM at ActiveState.com



From effbot at telia.com  Wed Sep 27 10:06:44 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Wed, 27 Sep 2000 10:06:44 +0200
Subject: [Python-Dev] FW: regarding the Python Developer posting...
References: <0G1J0028SA6KS4@mta5.snfc21.pbi.net>
Message-ID: <000c01c02859$e1502420$766940d5@hagrid>

dan wrote:
> >[Tim: use the current CVS tree instead... code's been replaced...]
> 
> duh! gotta read them archives before following up on a request... 
> can't trust the hyper-active Python development team with a code 
> freeze.... <wink>

heh.  your bug report was the main reason for getting this change
into 2.0b2, and we completely forgot to tell you about it...

> Unfortunately, test_re.py now seg faults.... which is caused by 
> test_sre.py... in particular the following:
> 
> src/Lib/test/test_sre.py
> 
> if verbose:
>     print 'Test engine limitations'
> 
> # Try nasty case that overflows the straightforward recursive
> # implementation of repeated groups.
> #test(r"""sre.match(r'(x)*', 50000*'x').span()""",
> #   (0, 50000), RuntimeError)
> #test(r"""sre.match(r'(x)*y', 50000*'x'+'y').span()""",
> #     (0, 50001), RuntimeError)
> #test(r"""sre.match(r'(x)*?y', 50000*'x'+'y').span()""",
> #     (0, 50001), RuntimeError)

umm.  I assume it bombs if you uncomment those lines, right?

you could try adding a Mac OS clause to the recursion limit stuff
in Modules/_sre.c:

#if !defined(USE_STACKCHECK)
#if defined(...whatever's needed to detect Mac OS X...)
#define USE_RECURSION_LIMIT 5000
#elif defined(MS_WIN64) || defined(__LP64__) || defined(_LP64)
/* require smaller recursion limit for a number of 64-bit platforms:
   Win64 (MS_WIN64), Linux64 (__LP64__), Monterey (64-bit AIX) (_LP64) */
/* FIXME: maybe the limit should be 40000 / sizeof(void*) ? */
#define USE_RECURSION_LIMIT 7500
#else
#define USE_RECURSION_LIMIT 10000
#endif
#endif

replace "...whatever...", and try values larger than 5000 (or smaller,
if necessary; 10000 is clearly too large for your platform).

(alternatively, you can increase the stack size.  maybe it's very small
by default?)
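(for the record, the "nasty case" itself is trivial to replay.  a hedged
sketch in modern spelling -- current re engines match repeated groups
iteratively, so the recursion limit discussed above no longer applies,
but the patterns are exactly the ones from test_sre.py:)

```python
import re

# the overflow cases: a repeated group applied to a 50000-character
# subject.  the 2.0-era recursive sre engine could blow the C stack
# here, hence USE_RECURSION_LIMIT; modern engines just match.
assert re.match(r'(x)*', 50000 * 'x').span() == (0, 50000)
assert re.match(r'(x)*y', 50000 * 'x' + 'y').span() == (0, 50001)
assert re.match(r'(x)*?y', 50000 * 'x' + 'y').span() == (0, 50001)
```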

</F>




From larsga at garshol.priv.no  Wed Sep 27 10:12:45 2000
From: larsga at garshol.priv.no (Lars Marius Garshol)
Date: 27 Sep 2000 10:12:45 +0200
Subject: [Python-Dev] Bogus SAX test case
In-Reply-To: <200009262034.WAA09761@loewis.home.cs.tu-berlin.de>
References: <200009262034.WAA09761@loewis.home.cs.tu-berlin.de>
Message-ID: <m3hf72uubm.fsf@lambda.garshol.priv.no>

* Martin v. Loewis
| 
| <?xml version="1.0" encoding="iso-8859-1"?>
| <ns:doc xmlns:ns1="http://www.python.org/xml-ns/saxtest/"></ns:doc>
| 
| (or, alternatively, the element could just be empty). Is that the
| XML that would produce above sequence of SAX events?

Nope, it's not.  No XML document could produce that particular
sequence of events.
 
| It seems to me that this XML is ill-formed, the namespace prefix ns
| is not defined here. Is that analysis correct? 

Not entirely.  The XML is perfectly well-formed, but it's not
namespace-compliant.

| Furthermore, the test checks whether the generator produces
| 
| <?xml version="1.0" encoding="iso-8859-1"?>
| <ns1:doc xmlns:ns1="http://www.python.org/xml-ns/saxtest/"></ns1:doc>
| 
| It appears that the expected output is bogus; I'd rather expect to get
| the original document back.

What original document? :-)
 
| My proposal would be to correct the test case to pass "ns1:doc" as
| the qname, 

I see that as being the best fix, and have now committed it.

| and to correct the generator to output the qname if that was
| provided by the reader.

We could do that, but the namespace name and the qname are supposed to
be equivalent in any case, so I don't see any reason to change it.
One problem with making that change is that it would no longer be
possible to roundtrip XML -> pyexpat -> SAX -> xmlgen -> XML, because
pyexpat does not provide qnames.

--Lars M.




From tim_one at email.msn.com  Wed Sep 27 10:45:57 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 27 Sep 2000 04:45:57 -0400
Subject: [Python-Dev] 2.0b2 is ... released?
Message-ID: <LNBBLJKPBEHFEDALKOLCIEFIHIAA.tim_one@email.msn.com>

The other guys are sleeping and I'm on vacation.  It *appears* that our West
Coast webmasters may have finished doing their thing, so pending Jeremy's
official announcement perhaps you'd just like to check it out:

    http://www.pythonlabs.com/products/python2.0/

I can't swear it's a release.  *Looks* like one, though!





From fredrik at pythonware.com  Wed Sep 27 11:00:34 2000
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Wed, 27 Sep 2000 11:00:34 +0200
Subject: [Python-Dev] 2.0b2 is ... released?
References: <LNBBLJKPBEHFEDALKOLCIEFIHIAA.tim_one@email.msn.com>
Message-ID: <016201c02861$66aee2d0$0900a8c0@SPIFF>


> The other guys are sleeping and I'm on vacation.  It *appears* that our
> West Coast webmasters may have finished doing their thing, so pending Jeremy's
> official announcement perhaps you'd just like to check it out:
>
>     http://www.pythonlabs.com/products/python2.0/
>
> I can't swear it's a release.  *Looks* like one, though!

the daily URL says so too:

    http://www.pythonware.com/daily/

(but even though we removed some 2.5 megs of unicode stuff,
the new tarball is nearly as large as the previous one.  less filling,
more taste?)

</F>




From fredrik at pythonware.com  Wed Sep 27 11:08:04 2000
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Wed, 27 Sep 2000 11:08:04 +0200
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
References: <LNBBLJKPBEHFEDALKOLCGEEGHIAA.tim_one@email.msn.com>
Message-ID: <018401c02862$72311820$0900a8c0@SPIFF>

tim wrote:
> > test test_unicodedata failed -- Writing:
> > 'e052289ecef97fc89c794cf663cb74a64631d34e', expected:
> > 'b88684df19fca8c3d0ab31f040dd8de89f7836fe'
>
> The problem appears to be that the test uses the secret "unicode-internal"
> encoding, which is dependent upon the big/little-endianess of your
> platform.

my fault -- when I saw that, I asked myself "why the heck doesn't mal
just use repr, like I did?" and decided that he'd used "unicode-escape"
to make sure the test didn't break if the repr encoding changed.

too bad my brain didn't trust my eyes...

> I can reproduce your flawed hash exactly on my platform by replacing this
> line:
>
>         h.update(u''.join(data).encode('unicode-internal'))

I suggest replacing "unicode-internal" with "utf-8" (which is as canonical
as anything can be...)
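(a hedged illustration of the endianness trap, using today's explicit
utf-16 codec names since "unicode-internal" is long gone; the codec
names here are my own stand-ins, not what 2.0 shipped:)

```python
import hashlib

s = u"A"
# "unicode-internal" emitted 16-bit code units in *native* byte order,
# so the same string hashed differently on little- and big-endian boxes;
# the explicit utf-16 codecs make the difference visible everywhere:
little = s.encode("utf-16-le")    # b'A\x00' -- what little-endian saw
big = s.encode("utf-16-be")       # b'\x00A' -- what big-endian saw
assert hashlib.sha1(little).hexdigest() != hashlib.sha1(big).hexdigest()

# utf-8 is a plain byte stream with no byte order, so it hashes the
# same on every platform:
assert s.encode("utf-8") == b"A"
```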

</F>




From tim_one at email.msn.com  Wed Sep 27 11:19:03 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 27 Sep 2000 05:19:03 -0400
Subject: [Python-Dev] 2.0b2 is ... released?
In-Reply-To: <016201c02861$66aee2d0$0900a8c0@SPIFF>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEFLHIAA.tim_one@email.msn.com>

>> The other guys are sleeping and I'm on vacation.  It *appears* that our
>> West Coast webmasters may have finished doing their thing, so
>> pending Jeremy's official announcement perhaps you'd just like to
>> check it out:
>>
>>     http://www.pythonlabs.com/products/python2.0/
>>
>> I can't swear it's a release.  *Looks* like one, though!

[/F]
> the daily URL says so too:
>
>     http://www.pythonware.com/daily/

Thanks, /F!  I'll *believe* it's a release if I can ever complete
downloading the Windows installer from that site.  S-l-o-w!

> (but even though we removed some 2.5 megs of unicode stuff,
> the new tarball is nearly as large as the previous one.  less filling,
> more taste?)

Heh, I expected *that* one:  the fact that the Unicode stuff was highly
compressible wasn't lost on gzip either.  The Windows installer shrunk less
than 10%, and that includes savings also due to (a) not shipping two full
copies of Lib/ anymore (looked like an ancient stray duplicate line in the
installer script), and (b) not shipping the debug .lib files anymore.
There's a much nicer savings after it's all unpacked, of course.

Hey!  Everyone check out the "what's new in 2.0b2" section!  This was an
incredible amount of good work in a 3-week period, and you should all be
proud of yourselves.  And *especially* proud if you actually helped <wink>.

if-you-just-got-in-the-way-we-love-you-too-ly y'rs  - tim





From mal at lemburg.com  Wed Sep 27 14:13:01 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Wed, 27 Sep 2000 14:13:01 +0200
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
References: <LNBBLJKPBEHFEDALKOLCGEEGHIAA.tim_one@email.msn.com> <018401c02862$72311820$0900a8c0@SPIFF>
Message-ID: <39D1E44D.C7E080D@lemburg.com>

Fredrik Lundh wrote:
> 
> tim wrote:
> > > test test_unicodedata failed -- Writing:
> > > 'e052289ecef97fc89c794cf663cb74a64631d34e', expected:
> > > 'b88684df19fca8c3d0ab31f040dd8de89f7836fe'
> >
> > The problem appears to be that the test uses the secret "unicode-internal"
> > encoding, which is dependent upon the big/little-endianess of your
> > > platform.
> 
> my fault -- when I saw that, I asked myself "why the heck doesn't mal
> just use repr, like I did?" and decided that he'd used "unicode-escape"
> to make sure the test didn't break if the repr encoding changed.
> 
> too bad my brain didn't trust my eyes...

repr() would have been a bad choice since the past has shown
that repr() does change. I completely forgot about the endianness
which affects the hash value.
 
> > I can reproduce your flawed hash exactly on my platform by replacing this
> > line:
> >
> >         h.update(u''.join(data).encode('unicode-internal'))
> 
> I suggest replacing "unicode-internal" with "utf-8" (which is as canonical
> as anything can be...)

I think UTF-8 will bring about problems with surrogates (that's
why I used the unicode-internal codec). I haven't checked this
though... I'll fix this ASAP.
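(for what it's worth, a quick check in modern spelling suggests UTF-8
copes fine with characters that need surrogate pairs -- a non-BMP
character is simply a single four-byte sequence; a hedged sketch, not
the 2.0 codec code:)

```python
# U+10000 sits outside the BMP: a surrogate pair in UTF-16, but a
# single four-byte sequence in UTF-8 (no surrogates involved)
ch = u"\U00010000"
assert ch.encode("utf-16-be") == b"\xd8\x00\xdc\x00"   # surrogate pair
assert ch.encode("utf-8") == b"\xf0\x90\x80\x80"       # direct encoding
assert ch.encode("utf-8").decode("utf-8") == ch        # round-trips
```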

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From thomas at xs4all.net  Wed Sep 27 14:19:42 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 27 Sep 2000 14:19:42 +0200
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14801.6680.507173.995404@cj42289-a.reston1.va.home.com>; from fdrake@beopen.com on Tue, Sep 26, 2000 at 05:50:16PM -0400
References: <14800.62087.617722.272109@bitdiddle.concentric.net> <14800.64622.961057.204969@cj42289-a.reston1.va.home.com> <14801.383.799094.8428@cj42289-a.reston1.va.home.com> <20000926141610.A6557@keymaster.enme.ucalgary.ca> <14801.6680.507173.995404@cj42289-a.reston1.va.home.com>
Message-ID: <20000927141942.M20757@xs4all.nl>

On Tue, Sep 26, 2000 at 05:50:16PM -0400, Fred L. Drake, Jr. wrote:

[ test_fcntl, test_pty and test_openpty failing on SuSe & Caldera Linux ]

>   Now, it may be that something strange is going on since these are
> the "virtual environments" on SourceForge.  I'm not sure these are
> really the same thing as running those systems.  I'm looking at the
> script to start SuSE; there's nothing really there but a chroot call;
> perhaps there's a kernel/library mismatch?

Nope, you almost got it. You were so close, too! It's not a kernel/library
thing, it's the chroot call ;) I'm *guessing* here, but it looks like you
get a faked privileged shell in a chrooted environment, which isn't actually
privileged (kind of like the FreeBSD 'jail' thing.) It doesn't surprise me
one bit that it fails on those three tests. In fact, I'm (delightedly)
surprised that it didn't fail more tests! But these three require some
close interaction between the kernel, the libc, and the filesystem (instead
of just kernel/fs, libc/fs or kernel/libc.)

It could be anything: security-checks on owner/mode in the kernel,
security-checks on same in libc, or perhaps something sees the chroot and
decides that deception is not going to work in this case. If Sourceforge is
serious about this virtual environment service they probably do want to know
about this, though. I'll see if I can get my SuSe-loving colleague to
compile&test Python on his box, and if that works alright, I think we can
safely claim this is a Sourceforge bug, not a Python one. I don't know
anyone using Caldera, though.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From mal at lemburg.com  Wed Sep 27 14:20:30 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Wed, 27 Sep 2000 14:20:30 +0200
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
References: <LNBBLJKPBEHFEDALKOLCGEEGHIAA.tim_one@email.msn.com> <018401c02862$72311820$0900a8c0@SPIFF> <39D1E44D.C7E080D@lemburg.com>
Message-ID: <39D1E60E.95E04302@lemburg.com>

"M.-A. Lemburg" wrote:
> 
> Fredrik Lundh wrote:
> >
> > tim wrote:
> > > > test test_unicodedata failed -- Writing:
> > > > 'e052289ecef97fc89c794cf663cb74a64631d34e', expected:
> > > > 'b88684df19fca8c3d0ab31f040dd8de89f7836fe'
> > >
> > > The problem appears to be that the test uses the secret "unicode-internal"
> > > encoding, which is dependent upon the big/little-endianess of your
> > > platform.
> >
> > my fault -- when I saw that, I asked myself "why the heck doesn't mal
> > just use repr, like I did?" and decided that he'd used "unicode-escape"
> > to make sure the test didn't break if the repr encoding changed.
> >
> > too bad my brain didn't trust my eyes...
> 
> repr() would have been a bad choice since the past has shown
> that repr() does change. I completely forgot about the endianness
> which affects the hash value.
> 
> > > I can reproduce your flawed hash exactly on my platform by replacing this
> > > line:
> > >
> > >         h.update(u''.join(data).encode('unicode-internal'))
> >
> > I suggest replacing "unicode-internal" with "utf-8" (which is as canonical
> > as anything can be...)
> 
> I think UTF-8 will bring about problems with surrogates (that's
> why I used the unicode-internal codec). I haven't checked this
> though... I'll fix this ASAP.

UTF-8 works for me. I'll check in a patch.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From fdrake at beopen.com  Wed Sep 27 15:22:56 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Wed, 27 Sep 2000 09:22:56 -0400 (EDT)
Subject: [Python-Dev] 2.0b2 is ... released?
In-Reply-To: <016201c02861$66aee2d0$0900a8c0@SPIFF>
References: <LNBBLJKPBEHFEDALKOLCIEFIHIAA.tim_one@email.msn.com>
	<016201c02861$66aee2d0$0900a8c0@SPIFF>
Message-ID: <14801.62640.276852.209527@cj42289-a.reston1.va.home.com>

Fredrik Lundh writes:
 > (but even though we removed some 2.5 megs of unicode stuff,
 > the new tarball is nearly as large as the previous one.  less filling,
 > more taste?)

  Umm... Zesty!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From jeremy at beopen.com  Wed Sep 27 18:04:36 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Wed, 27 Sep 2000 12:04:36 -0400 (EDT)
Subject: [Python-Dev] Python 2.0b2 is released!
Message-ID: <14802.6804.717866.176697@bitdiddle.concentric.net>

Python 2.0b2 is released.  The BeOpen PythonLabs and our cast of
SourceForge volunteers have fixed many bugs since the 2.0b1 release
three weeks ago.  Please go here to pick up the new release:

    http://www.pythonlabs.com/tech/python2.0/

There's a tarball, a Windows installer, RedHat RPMs, online
documentation, and a long list of fixed bugs.

The final release of Python 2.0 is expected in early- to mid-October.
We would appreciate feedback on the current beta release in order to
fix any remaining bugs before the final release.  Confirmation of
build and test success on less common platforms is also helpful.

Python 2.0 has many new features, including the following:

  - Augmented assignment, e.g. x += 1
  - List comprehensions, e.g. [x**2 for x in range(10)]
  - Extended import statement, e.g. import Module as Name
  - Extended print statement, e.g. print >> file, "Hello"
  - Optional collection of cyclical garbage
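(a quick taste of the features listed above -- everything here still
runs today except the extended print statement, which later became the
print() function with a file argument, so it is shown as a comment:)

```python
import sys

x = 1
x += 1                                   # augmented assignment
squares = [n ** 2 for n in range(10)]    # list comprehension
import math as m                         # extended import statement
# print >> sys.stderr, "Hello"           # extended print (2.x syntax)
assert x == 2
assert squares[:4] == [0, 1, 4, 9]
assert m.sqrt(squares[-1]) == 9.0
```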

This release fixes many known bugs.  The list of open bugs has dropped
to 50, and more than 100 bug reports have been resolved since Python
1.6.  To report a new bug, use the SourceForge bug tracker
http://sourceforge.net/bugs/?func=addbug&group_id=5470

-- Jeremy Hylton <http://www.python.org/~jeremy/>




From jeremy at beopen.com  Wed Sep 27 18:31:35 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Wed, 27 Sep 2000 12:31:35 -0400 (EDT)
Subject: [Python-Dev] Re: Python 2.0b2 is released!
In-Reply-To: <14802.6804.717866.176697@bitdiddle.concentric.net>
References: <14802.6804.717866.176697@bitdiddle.concentric.net>
Message-ID: <14802.8423.701972.950382@bitdiddle.concentric.net>

The correct URL for the Python 2.0b2 release is:
    http://www.pythonlabs.com/products/python2.0/

-- Jeremy Hylton <http://www.python.org/~jeremy/>



From tommy at ilm.com  Wed Sep 27 19:26:53 2000
From: tommy at ilm.com (Victor the Cleaner)
Date: Wed, 27 Sep 2000 10:26:53 -0700 (PDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEEGHIAA.tim_one@email.msn.com>
References: <14801.10496.986326.537462@mace.lucasdigital.com>
	<LNBBLJKPBEHFEDALKOLCGEEGHIAA.tim_one@email.msn.com>
Message-ID: <14802.11605.281385.45283@mace.lucasdigital.com>

Tim Peters writes:
| [Victor the Cleaner]
| > Jeremy asked me to send this report (which I originally sent just to
| > him) along to the rest of python-dev, so here ya go:
| 
| Bug reports should go to SourceForge, else as often as not they'll get
| lost.

Sorry, this wasn't intended to be a bug report (not yet, at least).
Jeremy asked for feedback on the release, and that's all I was trying
to give. 


| When you do this from a shell:
| 
| >>> u"A".encode("unicode-internal")
| 'A\000'
| >>>
| 
| I bet you get
| 
| '\000A'
| 
| Right?

Right, as usual. :)  Sounds like MAL already has this one fixed,
too... 



From martin at loewis.home.cs.tu-berlin.de  Wed Sep 27 20:36:04 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Wed, 27 Sep 2000 20:36:04 +0200
Subject: [XML-SIG] Re: [Python-Dev] Bogus SAX test case
In-Reply-To: <m3hf72uubm.fsf@lambda.garshol.priv.no> (message from Lars Marius
	Garshol on 27 Sep 2000 10:12:45 +0200)
References: <200009262034.WAA09761@loewis.home.cs.tu-berlin.de> <m3hf72uubm.fsf@lambda.garshol.priv.no>
Message-ID: <200009271836.UAA00872@loewis.home.cs.tu-berlin.de>

> | My proposal would be to correct the test case to pass "ns1:doc" as
> | the qname, 
> 
> I see that as being the best fix, and have now committed it.

Thanks!

> | and to correct the generator to output the qname if that was
> | provided by the reader.
> 
> We could do that, but the namespace name and the qname are supposed to
> be equivalent in any case, so I don't see any reason to change it.

What about

<foo xmlns:mine="martin:von.loewis">
  <bar xmlns:meiner="martin:von.loewis">
    <mine:foobar/>
    <meiner:foobar/>
  </bar>
</foo>

In that case, one of the qnames will change on output when your
algorithm is used - even if the parser provided the original names. By
the way, when parsing this text via

import xml.sax,xml.sax.handler,xml.sax.saxutils,StringIO
p=xml.sax.make_parser()
p.setContentHandler(xml.sax.saxutils.XMLGenerator())
p.setFeature(xml.sax.handler.feature_namespaces,1)
i=xml.sax.InputSource()
i.setByteStream(StringIO.StringIO("""<foo xmlns:mine="martin:von.loewis"><bar xmlns:meiner="martin:von.loewis"><mine:foobar/><meiner:foobar/></bar></foo>"""))
p.parse(i)
print

I get a number of interesting failures. Would you mind looking into
that?
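(for anyone replaying this on a current Python, an updated spelling of
the snippet above -- io.BytesIO stands in for StringIO and print is a
function; on recent versions the round trip appears to run, with the
generator picking one prefix per namespace URI, which is exactly the
qname-changing behaviour described above:)

```python
import io
import xml.sax
import xml.sax.handler
from xml.sax.saxutils import XMLGenerator

# two prefixes bound to the same namespace URI, as in the example above
doc = (b'<foo xmlns:mine="martin:von.loewis">'
       b'<bar xmlns:meiner="martin:von.loewis">'
       b'<mine:foobar/><meiner:foobar/></bar></foo>')

out = io.StringIO()
parser = xml.sax.make_parser()
parser.setContentHandler(XMLGenerator(out))
parser.setFeature(xml.sax.handler.feature_namespaces, 1)

source = xml.sax.InputSource()
source.setByteStream(io.BytesIO(doc))
parser.parse(source)
print(out.getvalue())
```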

On a related note, it seems that "<xml:hello/>" won't unparse
properly, either...

Regards,
Martin



From mal at lemburg.com  Wed Sep 27 20:53:24 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Wed, 27 Sep 2000 20:53:24 +0200
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
References: <14801.10496.986326.537462@mace.lucasdigital.com>
		<LNBBLJKPBEHFEDALKOLCGEEGHIAA.tim_one@email.msn.com> <14802.11605.281385.45283@mace.lucasdigital.com>
Message-ID: <39D24224.EAF1E144@lemburg.com>

Victor the Cleaner wrote:
> 
> Tim Peters writes:
> | [Victor the Cleaner]
> | > Jeremy asked me to send this report (which I originally sent just to
> | > him) along to the rest of python-dev, so here ya go:
> |
> | Bug reports should go to SourceForge, else as often as not they'll get
> | lost.
> 
> Sorry, this wasn't intended to be a bug report (not yet, at least).
> Jeremy asked for feedback on the release, and that's all I was trying
> to give.
> 
> | When you do this from a shell:
> |
> | >>> u"A".encode("unicode-internal")
> | 'A\000'
> | >>>
> |
> | I bet you get
> |
> | '\000A'
> |
> | Right?
> 
> Right, as usual. :)  Sounds like MAL already has this one fixed,
> too...

It is fixed in CVS ... don't know if the patch made it into
the release though. The new test now uses UTF-8 as encoding
which is endian-independent.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From tim_one at email.msn.com  Wed Sep 27 21:25:54 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 27 Sep 2000 15:25:54 -0400
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <39D24224.EAF1E144@lemburg.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEHIHIAA.tim_one@email.msn.com>

[Victor the Cleaner]
> Sorry, this wasn't intended to be a bug report (not yet, at least).
> Jeremy asked for feedback on the release, and that's all I was trying
> to give.

Tommy B, is that you, hiding behind a Victor mask?  Cool!  I was really
directing my rancor at Jeremy <wink>:  by the time he fwd'ed the msg here,
it was already too late to change the release, so it had already switched
from "feedback" to "bug".

[MAL]
> It is fixed in CVS ... don't know if the patch made it into
> the release though. The new test now uses UTF-8 as encoding
> which is endian-independent.

Alas, it was not in the release.  I didn't even know about it until after
the installers were all built and shipped.  Score another for last-second
improvements <0.5 wink>.

Very, very weird:  we all know that SHA is believed to be cryptologically
secure, so there was no feasible way to deduce why the hashes were
different.  But I was coming down with a fever at the time (now in full
bloom, alas), and just stared at the two hashes:

    good:  b88684df19fca8c3d0ab31f040dd8de89f7836fe
    bad:   e052289ecef97fc89c794cf663cb74a64631d34e

Do you see the pattern?  Ha!  I did!  They both end with "e", and in my
fuzzy-headed state I immediately latched on to that and thought "hmm ... 'e'
is for 'endian'".  Else I wouldn't have had a clue!

should-get-sick-more-often-i-guess-ly y'rs  - tim





From mal at lemburg.com  Wed Sep 27 21:38:13 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Wed, 27 Sep 2000 21:38:13 +0200
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
References: <LNBBLJKPBEHFEDALKOLCEEHIHIAA.tim_one@email.msn.com>
Message-ID: <39D24CA5.7F914B7E@lemburg.com>

[Tim Peters wrote about the test_unicodedata.py glitch]:
> 
> [MAL]
> > It is fixed in CVS ... don't know if the patch made it into
> > the release though. The new test now uses UTF-8 as encoding
> > which is endian-independent.
> 
> Alas, it was not in the release.  I didn't even know about it until after
> the installers were all built and shipped.  Score another for last-second
> improvements <0.5 wink>.

You're right. This shouldn't have been applied so close to the
release date/time. Looks like all reviewers work on little
endian machines...
 
> Very, very weird:  we all know that SHA is believed to be cryptologically
> secure, so there was no feasible way to deduce why the hashes were
> different. But I was coming down with a fever at the time (now in full
> bloom, alas), and just stared at the two hashes:
> 
>     good:  b88684df19fca8c3d0ab31f040dd8de89f7836fe
>     bad:   e052289ecef97fc89c794cf663cb74a64631d34e
> 
> Do you see the pattern?  Ha!  I did!  They both end with "e", and in my
> fuzzy-headed state I immediately latched on to that and thought "hmm ... 'e'
> is for 'endian'".  Else I wouldn't have had a clue!

Well, let's think of it as a hidden feature: the test fails
if and only if it is run on a big endian machine... should
have named the test to something more obvious, e.g.
test_littleendian.py ;-)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From jeremy at beopen.com  Wed Sep 27 21:59:52 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Wed, 27 Sep 2000 15:59:52 -0400 (EDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <39D24CA5.7F914B7E@lemburg.com>
References: <LNBBLJKPBEHFEDALKOLCEEHIHIAA.tim_one@email.msn.com>
	<39D24CA5.7F914B7E@lemburg.com>
Message-ID: <14802.20920.420649.929910@bitdiddle.concentric.net>

>>>>> "MAL" == M -A Lemburg <mal at lemburg.com> writes:

  MAL> [Tim Peters wrote about the test_unicodedata.py glitch]:
  >>
  >> [MAL]
  >> > It is fixed in CVS ... don't know if the patch made it into the
  >> > release though. The new test now uses UTF-8 as encoding which
  >> > is endian-independent.
  >>
  >> Alas, it was not in the release.  I didn't even know about it
  >> until after the installers were all built and shipped.  Score
  >> another for last-second improvements <0.5 wink>.

  MAL> You're right. This shouldn't have been applied so close to the
  MAL> release date/time. Looks like all reviewers work on little
  MAL> endian machines...
 
Yes.  I was a bit reckless; test_unicodedata and the latest distutils
checkins had been made following the official code freeze and were
not being added to fix a showstopper bug.  I should have deferred
them.

We'll have to be a lot more careful about the 2.0 final release.  PEP
200 has a tentative ship date of Oct. 10.  We should probably have a
code freeze on Oct. 6 and leave the weekend and Monday for verifying
that there are no build problems on little- and big-endian platforms.

Jeremy



From skip at mojam.com  Wed Sep 27 22:15:23 2000
From: skip at mojam.com (Skip Montanaro)
Date: Wed, 27 Sep 2000 15:15:23 -0500 (CDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14802.20920.420649.929910@bitdiddle.concentric.net>
References: <LNBBLJKPBEHFEDALKOLCEEHIHIAA.tim_one@email.msn.com>
	<39D24CA5.7F914B7E@lemburg.com>
	<14802.20920.420649.929910@bitdiddle.concentric.net>
Message-ID: <14802.21851.446506.215291@beluga.mojam.com>

    Jeremy> We'll have to be a lot more careful about the 2.0 final release.
    Jeremy> PEP 200 has a tentative ship date of Oct. 10.  We should probably
    Jeremy> have a code freeze on Oct. 6 and leave the weekend and Monday
    Jeremy> for verifying that there are no build problems on little- and
    Jeremy> big-endian platforms.

Since you can't test on all platforms, if you fix platform-specific bugs
between now and final release, I suggest you make bundles (tar, Windows
installer, whatever) available (without need for CVS) and specifically ask
the people who reported those bugs to check things out using the appropriate
bundle(s).  This is as opposed to making such stuff available and then
posting a general note to the various mailing lists asking people to try
things out.  I think if you're more direct with people who have
"interesting" platforms, you will improve the chances of wringing out a few
more bugs before the actual release.

Skip




From jeremy at beopen.com  Wed Sep 27 23:10:21 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Wed, 27 Sep 2000 17:10:21 -0400 (EDT)
Subject: [Python-Dev] buffer overflow in PC/getpathp.c
Message-ID: <14802.25149.170239.848119@bitdiddle.concentric.net>

Mark,

Would you have some time to review PC/getpathp.c for buffer overflow
vulnerabilities?  I just fixed several problems in Modules/getpath.c
that were caused by assuming that certain environment variables and
argv[0] would contain strings less than MAXPATHLEN bytes long.  I
assume the Windows version of the code could have the same
vulnerabilities.  

Jeremy

PS Is there some other Windows expert who could check into this?



From effbot at telia.com  Wed Sep 27 23:41:45 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Wed, 27 Sep 2000 23:41:45 +0200
Subject: [Python-Dev] stupid floating point question...
Message-ID: <001e01c028cb$bd20f620$766940d5@hagrid>

each unicode character has an optional "numeric value",
which may be a fractional value.

the unicodedata module provides a "numeric" function,
which returns a Python float representing this fraction.
this is currently implemented by a large switch stmnt,
containing entries like:

    case 0x2159:
        return (double) 1 / 6;

if I replace the numbers here with integer variables (read
from the character type table) and return the result to
Python, will str(result) be the same thing as before for all
reasonable values?

</F>




From tim_one at email.msn.com  Wed Sep 27 23:39:21 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 27 Sep 2000 17:39:21 -0400
Subject: [Python-Dev] stupid floating point question...
In-Reply-To: <001e01c028cb$bd20f620$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEIEHIAA.tim_one@email.msn.com>

Try again?  I have no idea what you're asking.  Obviously, str(i) won't look
anything like str(1./6) for any integer i, so *that's* not what you're
asking.

> -----Original Message-----
> From: python-dev-admin at python.org [mailto:python-dev-admin at python.org]On
> Behalf Of Fredrik Lundh
> Sent: Wednesday, September 27, 2000 5:42 PM
> To: python-dev at python.org
> Subject: [Python-Dev] stupid floating point question...
>
>
> each unicode character has an optional "numeric value",
> which may be a fractional value.
>
> the unicodedata module provides a "numeric" function,
> which returns a Python float representing this fraction.
> this is currently implemented by a large switch stmnt,
> containing entries like:
>
>     case 0x2159:
>         return (double) 1 / 6;
>
> if I replace the numbers here with integer variables (read
> from the character type table) and return the result to
> Python, will str(result) be the same thing as before for all
> reasonable values?
>
> </F>





From effbot at telia.com  Wed Sep 27 23:59:48 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Wed, 27 Sep 2000 23:59:48 +0200
Subject: [Python-Dev] stupid floating point question...
References: <LNBBLJKPBEHFEDALKOLCIEIEHIAA.tim_one@email.msn.com>
Message-ID: <005b01c028ce$4234bb60$766940d5@hagrid>

> Try again?  I have no idea what you're asking.  Obviously, str(i) won't
> look anything like str(1./6) for any integer i, so *that's* not what you're
> asking.

consider this code:

        PyObject* myfunc1(void) {
            return PyFloat_FromDouble((double) A / B);
        }

(where A and B are constants (#defines, or spelled out))

and this code:

        PyObject* myfunc2(int a, int b) {
            return PyFloat_FromDouble((double) a / b);
        }

if I call the latter with a=A and b=B, and pass the resulting
Python float through "str", will I get the same result on all
ANSI-compatible platforms?

(in the first case, the compiler will most likely do the casting
and the division for me, while in the latter case, it's done at
runtime)

</F>




From tommy at ilm.com  Wed Sep 27 23:48:50 2000
From: tommy at ilm.com (Victor the Cleaner)
Date: Wed, 27 Sep 2000 14:48:50 -0700 (PDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14802.21851.446506.215291@beluga.mojam.com>
References: <LNBBLJKPBEHFEDALKOLCEEHIHIAA.tim_one@email.msn.com>
	<39D24CA5.7F914B7E@lemburg.com>
	<14802.20920.420649.929910@bitdiddle.concentric.net>
	<14802.21851.446506.215291@beluga.mojam.com>
Message-ID: <14802.27432.535375.758974@mace.lucasdigital.com>

I'll be happy to test IRIX again when the time comes...

Skip Montanaro writes:
| 
|     Jeremy> We'll have to be a lot more careful about the 2.0 final release.
|     Jeremy> PEP 200 has a tentative ship date of Oct. 10.  We should probably
|     Jeremy> have a code freeze on Oct. 6 and leave the weekend and Monday
|     Jeremy> for verifying that there are no build problems on little- and
|     Jeremy> big-endian platforms.
| 
| Since you can't test on all platforms, if you fix platform-specific bugs
| between now and final release, I suggest you make bundles (tar, Windows
| installer, whatever) available (without need for CVS) and specifically ask
| the people who reported those bugs to check things out using the appropriate
| bundle(s).  This is as opposed to making such stuff available and then
| posting a general note to the various mailing lists asking people to try
| things out.  I think if you're more direct with people who have
| "interesting" platforms, you will improve the chances of wringing out a few
| more bugs before the actual release.
| 
| Skip
| 
| 
| _______________________________________________
| Python-Dev mailing list
| Python-Dev at python.org
| http://www.python.org/mailman/listinfo/python-dev



From tommy at ilm.com  Wed Sep 27 23:51:23 2000
From: tommy at ilm.com (Victor the Cleaner)
Date: Wed, 27 Sep 2000 14:51:23 -0700 (PDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <LNBBLJKPBEHFEDALKOLCEEHIHIAA.tim_one@email.msn.com>
References: <39D24224.EAF1E144@lemburg.com>
	<LNBBLJKPBEHFEDALKOLCEEHIHIAA.tim_one@email.msn.com>
Message-ID: <14802.27466.120918.480152@mace.lucasdigital.com>

Tim Peters writes:
| [Victor the Cleaner]
| > Sorry, this wasn't intended to be bug report (not yet, at least).
| > Jeremy asked for feedback on the release, and that's all I was trying
| > to give.
| 
| Tommy B, is that you, hiding behind a Victor mask?  Cool!  I was really
| directing my rancor at Jeremy <wink>:  by the time he fwd'ed the msg here,
| it was already too late to change the release, so it had already switched
| from "feedback" to "bug".

Yup, it's me.  I've been leery of posting from my work address for a
long time, but Ping seemed to be getting away with it so I figured
"what the hell" ;)

| 
| Do you see the pattern?  Ha!  I did!  They both end with "e", and in my
| fuzzy-headed state I immediately latched on to that and thought "hmm ... 'e'
| is for 'endian'".  Else I wouldn't have had a clue!

I thought maybe 'e' was for 'eeeeeew' when you realized this was IRIX ;)

| 
| should-get-sick-more-often-i-guess-ly y'rs  - tim

Or just stay sick.  That's what I do...



From tim_one at email.msn.com  Thu Sep 28 00:08:50 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 27 Sep 2000 18:08:50 -0400
Subject: [Python-Dev] stupid floating point question...
In-Reply-To: <005b01c028ce$4234bb60$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEIIHIAA.tim_one@email.msn.com>

Ah!  I wouldn't worry about this -- go right ahead.  Not only the str()'s,
but even the repr()'s, are very likely to be identical.

A *good* compiler won't collapse *any* fp expressions at compile-time,
because doing so can change the 754 semantics at runtime (for example, the
evaluation of 1./6 triggers the 754 "inexact" signal, and the compiler has
no way to know whether the user is expecting that to happen at runtime, so a
good compiler will leave it alone ... at KSR, I munged our C compiler to
*try* collapsing at compile-time, capturing the 754 state before and
comparing it to the 754 state after, doing that again for each possible
rounding mode, and leaving the runtime code in if and only if any evaluation
changed any state; but, that was a *damned* good compiler <wink>).
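(Editorial aside: Tim's claim, that dividing the same two integers yields a
bit-identical double, and hence an identical str(), whether the operands are
literals or runtime variables, can be spot-checked from Python itself.  A
sketch; it only verifies the behaviour on the platform it runs on:)

```python
import struct

def bits(x):
    # Raw 64-bit pattern of a double, for an exact rather than
    # approximate comparison.
    return struct.pack('<d', x)

a, b = 1, 6
folded = 1.0 / 6.0       # division a compiler could fold at compile time
runtime = float(a) / b   # the same division performed on variables at runtime

assert bits(folded) == bits(runtime)
assert str(folded) == str(runtime)
```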

> -----Original Message-----
> From: Fredrik Lundh [mailto:effbot at telia.com]
> Sent: Wednesday, September 27, 2000 6:00 PM
> To: Tim Peters; python-dev at python.org
> Subject: Re: [Python-Dev] stupid floating point question...
>
>
> > Try again?  I have no idea what you're asking.  Obviously, str(i) won't
> > look anything like str(1./6) for any integer i, so *that's* not
> > what you're asking.
>
> consider this code:
>
>         PyObject* myfunc1(void) {
>             return PyFloat_FromDouble((double) A / B);
>         }
>
> (where A and B are constants (#defines, or spelled out))
>
> and this code:
>
>         PyObject* myfunc2(int a, int b) {
>             return PyFloat_FromDouble((double) a / b);
>         }
>
> if I call the latter with a=A and b=B, and pass the resulting
> Python float through "str", will I get the same result on all
> ANSI-compatible platforms?
>
> (in the first case, the compiler will most likely do the casting
> and the division for me, while in the latter case, it's done at
> runtime)





From mal at lemburg.com  Thu Sep 28 00:08:42 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 28 Sep 2000 00:08:42 +0200
Subject: [Python-Dev] stupid floating point question...
References: <LNBBLJKPBEHFEDALKOLCIEIEHIAA.tim_one@email.msn.com> <005b01c028ce$4234bb60$766940d5@hagrid>
Message-ID: <39D26FEA.E17675AA@lemburg.com>

Fredrik Lundh wrote:
> 
> > Try again?  I have no idea what you're asking.  Obviously, str(i) won't
> > look anything like str(1./6) for any integer i, so *that's* not what you're
> > asking.
> 
> consider this code:
> 
>         PyObject* myfunc1(void) {
>             return PyFloat_FromDouble((double) A / B);
>         }
> 
> (where A and B are constants (#defines, or spelled out))
> 
> and this code:
> 
>         PyObject* myfunc2(int a, int b) {
>             return PyFloat_FromDouble((double) a / b);
>         }
> 
> if I call the latter with a=A and b=B, and pass the resulting
> Python float through "str", will I get the same result on all
> ANSI-compatible platforms?
> 
> (in the first case, the compiler will most likely do the casting
> and the division for me, while in the latter case, it's done at
> runtime)

Casts have a higher precedence than e.g. /, so (double)a/b will
be compiled as ((double)a)/b.

If you'd rather play it safe, just add the extra parentheses.
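(Editorial aside, a hedged illustration of the precedence point in Python
terms: C's ((double)a)/b corresponds to float(a) / b, while the
mis-parenthesized (double)(a/b) with integer operands corresponds to
float(a // b), where the fractional part is lost before the conversion:)

```python
a, b = 1, 6

# Convert first, then divide: this is what (double)a/b actually means in C.
assert float(a) / b == 1.0 / 6.0

# Divide first in integer arithmetic, then convert: the (double)(a/b) reading.
assert float(a // b) == 0.0
```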

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From m.favas at per.dem.csiro.au  Thu Sep 28 00:08:01 2000
From: m.favas at per.dem.csiro.au (Mark Favas)
Date: Thu, 28 Sep 2000 06:08:01 +0800
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
Message-ID: <39D26FC1.B8214C80@per.dem.csiro.au>

Jeremy writes...
We'll have to be a lot more careful about the 2.0 final release.  PEP
200 has a tentative ship date of Oct. 10.  We should probably have a
code freeze on Oct. 6 and leave the weekend and Monday for verifying
that there are no build problems on little- and big-endian platforms.

... and 64-bit platforms (or those where sizeof(long) != sizeof(int) !=
4) <grin> - a change yesterday to md5.h caused a compilation failure.
Logged as 
http://sourceforge.net/bugs/?func=detailbug&bug_id=115506&group_id=5470

-- 
Mark Favas  -   m.favas at per.dem.csiro.au
CSIRO, Private Bag No 5, Wembley, Western Australia 6913, AUSTRALIA



From tim_one at email.msn.com  Thu Sep 28 00:40:10 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 27 Sep 2000 18:40:10 -0400
Subject: [Python-Dev] Python 2.0b2 note for Windows developers
Message-ID: <LNBBLJKPBEHFEDALKOLCCEILHIAA.tim_one@email.msn.com>

Since most Python users on Windows don't have any use for them, I trimmed
the Python 2.0b2 installer by leaving out the debug-build .lib, .pyd, .exe
and .dll files.  If you want them, they're available in a separate zip
archive; read the Windows Users notes at

http://www.pythonlabs.com/products/python2.0/download_python2.0b2.html

for info and a download link.  If you don't already know *why* you might
want them, trust me:  you don't want them <wink>.

they-don't-even-make-good-paperweights-ly y'rs  - tim





From jeremy at beopen.com  Thu Sep 28 04:55:57 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Wed, 27 Sep 2000 22:55:57 -0400
Subject: [Python-Dev] RE: buffer overflow in PC/getpathp.c
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBAEFFDLAA.MarkH@ActiveState.com>
Message-ID: <AJEAKILOCCJMDILAPGJNOEOICBAA.jeremy@beopen.com>

>I would be happy to!  Although I am happy to report that I believe it
>safe - I have been very careful of this from the time I wrote it.
>
>What is the process?  How formal should it be?

Not sure how formal it should be, but I would recommend you review uses of
strcpy and convince yourself that the source string is never longer than the
target buffer.  I am not convinced.  For example, in calculate_path(), char
*pythonhome is initialized from an environment variable and thus has unknown
length.  Later it is used in a strcpy(prefix, pythonhome), where prefix has a
fixed length.  This looks like a vulnerability that could be closed by using
strncpy(prefix, pythonhome, MAXPATHLEN).

The Unix version of this code had three or four vulnerabilities of this
sort.  So I imagine the Windows version has those too.  I was imagining that
the registry offered a whole new opportunity to provide unexpectedly long
strings that could overflow buffers.

Jeremy





From MarkH at ActiveState.com  Thu Sep 28 04:53:08 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Thu, 28 Sep 2000 13:53:08 +1100
Subject: [Python-Dev] RE: buffer overflow in PC/getpathp.c
In-Reply-To: <AJEAKILOCCJMDILAPGJNOEOICBAA.jeremy@beopen.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBEEGADLAA.MarkH@ActiveState.com>

> target buffer.  I am not convinced.  For example, in
> calculate_path(), char
> *pythonhome is initialized from an environment variable and thus

Oh - ok - sorry.  I was speaking from memory.  From memory, I believe you
will find the registry functions safe - but likely not the older
environment based stuff, I agree.

I will be happy to look into this.

Mark.




From fdrake at beopen.com  Thu Sep 28 04:57:46 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Wed, 27 Sep 2000 22:57:46 -0400 (EDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <39D26FC1.B8214C80@per.dem.csiro.au>
References: <39D26FC1.B8214C80@per.dem.csiro.au>
Message-ID: <14802.45994.485874.454963@cj42289-a.reston1.va.home.com>

Jeremy Hylton writes:
 > We'll have to be a lot more careful about the 2.0 final release.  PEP
 > 200 has a tenative ship date of Oct. 10.  We should probably have a
 > code freeze on Oct. 6 and leave the weekend and Monday for verifying
 > that there are no build problems on little- and big-endian platforms.

  And hopefully we'll have a SPARC machine available before then, but
the timeframe is uncertain.

Mark Favas writes:
 > ... and 64-bit platforms (or those where sizeof(long) != sizeof(int) !=
 > 4) <grin> - a change yesterday to md5.h caused a compilation failure.

  I just checked in a patch based on Tim's comment on this; please
test this on your machine if you can.  Thanks!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From dkwolfe at pacbell.net  Thu Sep 28 17:08:52 2000
From: dkwolfe at pacbell.net (Dan Wolfe)
Date: Thu, 28 Sep 2000 08:08:52 -0700
Subject: [Python-Dev] FW: regarding the Python Developer posting...
Message-ID: <0G1L00JDRRD23W@mta6.snfc21.pbi.net>

>> [Seg faults in test_sre.py while testing limits]
>> 
>you could try adding a Mac OS clause to the recursion limit stuff
>in Modules/_sre.c:
>
>#if !defined(USE_STACKCHECK)
>#if defined(...whatever's needed to detect Mac OS X...)
>#define USE_RECURSION_LIMIT 5000
>#elif defined(MS_WIN64) || defined(__LP64__) || defined(_LP64)
>/* require smaller recursion limit for a number of 64-bit platforms:
>   Win64 (MS_WIN64), Linux64 (__LP64__), Monterey (64-bit AIX) (_LP64) */
>/* FIXME: maybe the limit should be 40000 / sizeof(void*) ? */
>#define USE_RECURSION_LIMIT 7500
>#else
>#define USE_RECURSION_LIMIT 10000
>#endif
>#endif
>
>replace "...whatever...", and try larger values than 5000 (or smaller,
>if necessary.  10000 is clearly too large for your platform).
>
>(alternatively, you can increase the stack size.  maybe it's very small
>by default?)

Hi /F,

I spotted the USE_STACKCHECK, got curious, and went hunting for it... of 
course curiosity kills the cat... it's time to go to work now... 
meaning that the large number of replies, counter-replies, code and 
follow-up that I'm going to need to wade through is going to have to wait.

Why, you ask?  Well, when you strip Mac OS X down to the core... it's 
Unix-based and therefore has the getrusage call... which means that I need 
to take a look at some of the patches - 
<http://sourceforge.net/patch/download.php?id=101352>

In the Public Beta the stack size is currently set to 512K by default... 
which is usually enough for most processes... but not sre...
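(Editorial aside, sketched as a loose analogy: CPython's own recursion
guard, sys.setrecursionlimit(), is the Python-level cousin of _sre's
USE_RECURSION_LIMIT, and shows what a small limit does to deep recursion:)

```python
import sys

old = sys.getrecursionlimit()
sys.setrecursionlimit(100)      # simulate a platform with a very small stack

def depth(n=0):
    # Recurse until the interpreter's guard trips, then report the depth.
    try:
        return depth(n + 1)
    except RecursionError:      # RuntimeError on older Pythons
        return n

assert depth() < 100            # the guard fired well before 100 frames
sys.setrecursionlimit(old)      # restore the previous limit
```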

I-should-have-stayed-up-all-night'ly yours,

- Dan



From loewis at informatik.hu-berlin.de  Thu Sep 28 17:37:10 2000
From: loewis at informatik.hu-berlin.de (Martin von Loewis)
Date: Thu, 28 Sep 2000 17:37:10 +0200 (MET DST)
Subject: [Python-Dev] stupid floating point question...
Message-ID: <200009281537.RAA21436@pandora.informatik.hu-berlin.de>

> A *good* compiler won't collapse *any* fp expressions at
> compile-time, because doing so can change the 754 semantics at
> runtime (for example, the evaluation of 1./6 triggers the 754
> "inexact" signal, and the compiler has no way to know whether the
> user is expecting that to happen at runtime, so a good compiler will
> leave it alone

Of course, that doesn't say anything about what *most* compilers do.
For example, gcc, on i586-pc-linux-gnu, compiles

double foo(){
	return (double)1/6;
}

into

.LC0:
	.long 0x55555555,0x3fc55555
.text
	.align 4
.globl foo
	.type	 foo, at function
foo:
	fldl .LC0
	ret

when compiling with -fomit-frame-pointer -O2. That still doesn't say
anything about what most compilers do - if there is interest, we could
perform a comparative study on the subject :-)

The "would break 754" argument is pretty weak, IMO - gcc, for example,
doesn't claim to comply with that standard.

Regards,
Martin




From jeremy at beopen.com  Thu Sep 28 18:58:48 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 28 Sep 2000 12:58:48 -0400 (EDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14802.21851.446506.215291@beluga.mojam.com>
References: <LNBBLJKPBEHFEDALKOLCEEHIHIAA.tim_one@email.msn.com>
	<39D24CA5.7F914B7E@lemburg.com>
	<14802.20920.420649.929910@bitdiddle.concentric.net>
	<14802.21851.446506.215291@beluga.mojam.com>
Message-ID: <14803.30920.93791.816163@bitdiddle.concentric.net>

>>>>> "SM" == Skip Montanaro <skip at mojam.com> writes:

  Jeremy> We'll have to be a lot more careful about the 2.0 final
  Jeremy> release.  PEP 200 has a tentative ship date of Oct. 10.  We
  Jeremy> should probably have a code freeze on Oct. 6 and leave the
  Jeremy> weekend and Monday for verifying that there are no build
  Jeremy> problems on little- and big-endian platforms.

  SM> Since you can't test on all platforms, if you fix
  SM> platform-specific bugs between now and final release, I suggest
  SM> you make bundles (tar, Windows installer, whatever) available
  SM> (without need for CVS) and specifically ask the people who
  SM> reported those bugs to check things out using the appropriate
  SM> bundle(s).

Good idea!  I've set up a cron job that will build a tarball every
night at 3am and place it on the ftp server at python.beopen.com:
    ftp://python.beopen.com/pub/python/snapshots/

I've started things off with a tar ball I built just now.
    Python-2.0b2-devel-2000-09-28.tar.gz

Tommy -- Could you use this snapshot to verify that the unicode test
is fixed?

Jeremy




From thomas.heller at ion-tof.com  Thu Sep 28 19:05:02 2000
From: thomas.heller at ion-tof.com (Thomas Heller)
Date: Thu, 28 Sep 2000 19:05:02 +0200
Subject: [Python-Dev] Re: [Distutils] Distutils 1.0 code freeze: Oct 1
References: <20000926205312.A1470@beelzebub>
Message-ID: <02af01c0296e$40cf1b30$4500a8c0@thomasnb>

> If you know about any outstanding Distutils bugs, please tell me *now*.
> Put 'em in the SourceForge bug database if you're wondering why I
> haven't fixed them yet -- they might have gotten lost, I might not know
> about 'em, etc.  If you're not sure, put it in SourceForge.

Mike Fletcher found another bug: extensions built on Windows
(at least with MSVC) in debug mode link with the wrong Python
import library.  This leads to crashes because the extension
loads the wrong Python DLL at runtime.

Will report this on sourceforge, although I doubt Greg will be able
to fix this...

Distutils code freeze: Greg, I have some time next week to work on
this. Do you give me permission to check it in if I find a solution?

Thomas




From martin at loewis.home.cs.tu-berlin.de  Thu Sep 28 21:32:00 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Thu, 28 Sep 2000 21:32:00 +0200
Subject: [Python-Dev] Dynamically loaded extension modules on MacOS X
Message-ID: <200009281932.VAA01999@loewis.home.cs.tu-berlin.de>

Has anybody succeeded in building extension modules for 2.0b1 on MacOS
X? On xml-sig, we had a report that the pyexpat module would not build
dynamically when building was initiated by the distutils, see the
report in

http://sourceforge.net/bugs/?func=detailbug&bug_id=115544&group_id=6473

Essentially, Python was configured with "-with-threads -with-dyld
-with-suffix=.exe", which causes extension modules to be linked as

cc -bundle -prebind {object files} -o {target}.so

With this linker line, the linker reported

/usr/bin/ld: warning -prebind has no effect with -bundle

and then

/usr/bin/ld: Undefined symbols:
_PyArg_ParseTuple
_PyArg_ParseTupleAndKeywords
...*removed a few dozen more symbols*...

So apparently the command line options are bogus for the compiler,
which identifies itself as

    Reading specs from /usr/libexec/ppc/2.95.2/specs
    Apple Computer, Inc. version cc-796.3, based on gcc driver version
     2.7.2.1 executing gcc version 2.95.2

Also, these options apparently won't cause creation of a shared
library. I wonder whether a simple "cc -shared" won't do the trick -
can a Mac expert enlighten me?

Regards,
Martin



From tommy at ilm.com  Thu Sep 28 21:38:54 2000
From: tommy at ilm.com (Victor the Cleaner)
Date: Thu, 28 Sep 2000 12:38:54 -0700 (PDT)
Subject: [Python-Dev] Python 2.0 beta 2 pre-release
In-Reply-To: <14803.30920.93791.816163@bitdiddle.concentric.net>
References: <LNBBLJKPBEHFEDALKOLCEEHIHIAA.tim_one@email.msn.com>
	<39D24CA5.7F914B7E@lemburg.com>
	<14802.20920.420649.929910@bitdiddle.concentric.net>
	<14802.21851.446506.215291@beluga.mojam.com>
	<14803.30920.93791.816163@bitdiddle.concentric.net>
Message-ID: <14803.40496.957808.858138@mace.lucasdigital.com>

Jeremy Hylton writes:
| 
| I've started things off with a tar ball I built just now.
|     Python-2.0b2-devel-2000-09-28.tar.gz
| 
| Tommy -- Could you use this snapshot to verify that the unicode test
| is fixed?


Sure thing.  I just tested it and it passed test_unicodedata.  Looks
good on this end...



From tim_one at email.msn.com  Thu Sep 28 21:59:55 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 28 Sep 2000 15:59:55 -0400
Subject: [Python-Dev] RE: stupid floating point question...
In-Reply-To: <200009281537.RAA21436@pandora.informatik.hu-berlin.de>
Message-ID: <LNBBLJKPBEHFEDALKOLCGELFHIAA.tim_one@email.msn.com>

[Tim]
> A *good* compiler won't collapse *any* fp expressions at
> compile-time ...

[Martin von Loewis]
> Of course, that doesn't say anything about what *most* compilers do.

Doesn't matter in this case; I told /F not to worry about it having taken
that all into account.  Almost all C compilers do a piss-poor job of taking
floating-point seriously, but it doesn't really matter for the purpose /F
has in mind.

[an example of gcc precomputing the best possible result]
> 	return (double)1/6;
> ...
> 	.long 0x55555555,0x3fc55555

No problem.  If you set the HW rounding mode to +infinity during
compilation, the first chunk there would end with a 6 instead.  Would affect
the tail end of the repr(), but not the str().

> ...
> when compiling with -fomit-frame-pointer -O2. That still doesn't say
> anything about what most compilers do - if there is interest, we could
> perform a comparative study on the subject :-)

No need.

> The "would break 754" argument is pretty weak, IMO - gcc, for example,
> doesn't claim to comply to that standard.

/F's question was about fp.  754 is the only hope he has for any x-platform
consistency (C89 alone gives no hope at all, and no basis for answering his
question).  To the extent that a C compiler ignores 754, it makes x-platform
fp consistency impossible (which, btw, Python inherits from C:  we can't
even manage to get string<->float working consistently across 100%
754-conforming platforms!).  Whether that's a weak argument or not depends
entirely on how important x-platform consistency is to a given app.  In /F's
specific case, a sloppy compiler is "good enough".

i'm-the-only-compiler-writer-i-ever-met-who-understood-fp<0.5-wink>-ly
    y'rs  - tim





From effbot at telia.com  Thu Sep 28 22:40:34 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Thu, 28 Sep 2000 22:40:34 +0200
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point question...)
References: <LNBBLJKPBEHFEDALKOLCGELFHIAA.tim_one@email.msn.com>
Message-ID: <004f01c0298c$62ba2320$766940d5@hagrid>

tim wrote:
> > Of course, that doesn't say anything about what *most* compilers do.
> 
> Doesn't matter in this case; I told /F not to worry about it having taken
> that all into account.  Almost all C compilers do a piss-poor job of taking
> floating-point seriously, but it doesn't really matter for the purpose /F
> has in mind.

to make it clear for everyone: I'm planning to get rid of the last
remaining switch statement in unicodectype.c ("numerical value"),
and replace the doubles in there with rationals.

the problem here is that MAL's new test suite uses "str" on the
return value from that function, and it would be a bit annoying if we
ended up with a Unicode test that might fail on platforms with
lousy floating point support...

:::

on the other hand, I'm not sure I think it's a really good idea to
have "numeric" return a floating point value.  consider this:

>>> import unicodedata
>>> unicodedata.numeric(u"\N{VULGAR FRACTION ONE THIRD}")
0.33333333333333331

(the glyph looks like "1/3", and that's also what the numeric
property field in the Unicode database says)

:::

if I had access to the time machine, I'd change it to:

>>> unicodedata.numeric(u"\N{VULGAR FRACTION ONE THIRD}")
(1, 3)

...but maybe we can add an alternate API that returns the
*exact* fraction (as a numerator/denominator tuple)?

>>> unicodedata.numeric2(u"\N{VULGAR FRACTION ONE THIRD}")
(1, 3)

(hopefully, someone will come up with a better name)

</F>




From ping at lfw.org  Thu Sep 28 22:35:24 2000
From: ping at lfw.org (The Ping of Death)
Date: Thu, 28 Sep 2000 15:35:24 -0500 (CDT)
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point
 question...)
In-Reply-To: <004f01c0298c$62ba2320$766940d5@hagrid>
Message-ID: <Pine.LNX.4.10.10009281534010.5685-100000@server1.lfw.org>

On Thu, 28 Sep 2000, Fredrik Lundh wrote:
> if I had access to the time machine, I'd change it to:
> 
> >>> unicodedata.numeric(u"\N{VULGAR FRACTION ONE THIRD}")
> (1, 3)
> 
> ...but maybe we can add an alternate API that returns the
> *exact* fraction (as a numerator/denominator tuple)?
> 
> >>> unicodedata.numeric2(u"\N{VULGAR FRACTION ONE THIRD}")
> (1, 3)
> 
> (hopefully, someone will come up with a better name)

unicodedata.rational might be an obvious choice.

    >>> unicodedata.rational(u"\N{VULGAR FRACTION ONE THIRD}")
    (1, 3)
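(Editorial aside: unicodedata.rational() never made it into the library, but
the exact pair can be recovered from the float that unicodedata.numeric()
does return, because Unicode's vulgar fractions all have tiny denominators.
A sketch using the fractions module, which arrived years later in Python
2.6; the rational() helper below is hypothetical:)

```python
import unicodedata
from fractions import Fraction

def rational(ch):
    # numeric() returns a float such as 0.333...; limit_denominator()
    # snaps it back to the nearest fraction with a small denominator.
    f = Fraction(unicodedata.numeric(ch)).limit_denominator(1000)
    return (f.numerator, f.denominator)

assert rational(u"\N{VULGAR FRACTION ONE THIRD}") == (1, 3)
assert rational(u"\N{VULGAR FRACTION ONE QUARTER}") == (1, 4)
```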


-- ?!ng




From tim_one at email.msn.com  Thu Sep 28 22:52:28 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 28 Sep 2000 16:52:28 -0400
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point question...)
In-Reply-To: <Pine.LNX.4.10.10009281534010.5685-100000@server1.lfw.org>
Message-ID: <LNBBLJKPBEHFEDALKOLCEELJHIAA.tim_one@email.msn.com>

[/F]
> ...but maybe we can add an alternate API that returns the
> *exact* fraction (as a numerator/denominator tuple)?
>
> >>> unicodedata.numeric2(u"\N{VULGAR FRACTION ONE THIRD}")
> (1, 3)
>
> (hopefully, someone will come up with a better name)

[The Ping of Death]

LOL!  Great name, Ping.

> unicodedata.rational might be an obvious choice.
>
>     >>> unicodedata.rational(u"\N{VULGAR FRACTION ONE THIRD}")
>     (1, 3)

Perfect -- another great name.  Beats all heck out of unicodedata.vulgar()
too.

leaving-it-up-to-/f-to-decide-what-.rational()-should-return-for-pi-
    ly y'rs  - the timmy of death





From thomas at xs4all.net  Thu Sep 28 22:53:30 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 28 Sep 2000 22:53:30 +0200
Subject: [Python-Dev] 2.0b2 on Slackware 7.0
In-Reply-To: <14801.2005.843456.598712@cj42289-a.reston1.va.home.com>; from fdrake@beopen.com on Tue, Sep 26, 2000 at 04:32:21PM -0400
References: <14801.2005.843456.598712@cj42289-a.reston1.va.home.com>
Message-ID: <20000928225330.A26568@xs4all.nl>

On Tue, Sep 26, 2000 at 04:32:21PM -0400, Fred L. Drake, Jr. wrote:

>   I just built and tested 2.0b2 on Slackware 7.0, and found that
> threads failed miserably.  I got the message:

> pthread_cond_wait: Interrupted system call

>   If anyone has any ideas, please send them along!  I'll turn this
> into a real bug report later.

I'm inclined to nudge this towards a libc bug... The exact version of glibc
Slackware 7 uses would be important, in that case. Redhat has been using
glibc 2.1.3 for a while, which seems stable, but I have no clue what
Slackware is using nowadays (I believe they were one of the last
of the major distributions to move to glibc, but I might be mistaken.) And
then there is the possibility of optimization bugs in the gcc that compiled
Python or the gcc that compiled the libc/libpthreads. 

(That last bit is easy to test though: copy the python binary from a working
linux machine with the same kernel major version & libc major version. If it
works, it's an optimization bug. If it works bug exhibits the same bug, it's
probably libc/libpthreads causing it somehow. If it fails to start
altogether, Slackware is using strange libs (and they might be the cause of
the bug, or might be just the *exposer* of the bug.)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From effbot at telia.com  Thu Sep 28 23:14:45 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Thu, 28 Sep 2000 23:14:45 +0200
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point question...)
References: <LNBBLJKPBEHFEDALKOLCEELJHIAA.tim_one@email.msn.com>
Message-ID: <00cb01c02991$23f61360$766940d5@hagrid>

tim wrote:
> leaving-it-up-to-/f-to-decide-what-.rational()-should-return-for-pi-
>     ly y'ts  - the timmy of death

oh, the unicode folks have figured that one out:

>>> unicodedata.numeric(u"\N{GREEK PI SYMBOL}")
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
ValueError: not a numeric character

</F>




From effbot at telia.com  Thu Sep 28 23:49:13 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Thu, 28 Sep 2000 23:49:13 +0200
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point question...)
References: <LNBBLJKPBEHFEDALKOLCEELJHIAA.tim_one@email.msn.com>
Message-ID: <002a01c02996$9b1742c0$766940d5@hagrid>

tim wrote:
> > unicodedata.rational might be an obvious choice.
> >
> >     >>> unicodedata.rational(u"\N{VULGAR FRACTION ONE THIRD}")
> >     (1, 3)
> 
> Perfect -- another great name.  Beats all heck out of unicodedata.vulgar()
> too.

should I interpret this as a +1, or should I write a PEP on
this topic? ;-)

</F>




From tim_one at email.msn.com  Fri Sep 29 00:12:23 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 28 Sep 2000 18:12:23 -0400
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point question...)
In-Reply-To: <00cb01c02991$23f61360$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCCELNHIAA.tim_one@email.msn.com>

[tim]
> leaving-it-up-to-/f-to-decide-what-.rational()-should-return-for-pi-
>     ly y'ts  - the timmy of death

[/F]
> oh, the unicode folks have figured that one out:
>
> >>> unicodedata.numeric(u"\N{GREEK PI SYMBOL}")
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> ValueError: not a numeric character

Ya, except I'm starting to suspect they're not floating-point experts
either:

>>> unicodedata.numeric(u"\N{PLANCK CONSTANT OVER TWO PI}")
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
ValueError: not a numeric character
>>> unicodedata.numeric(u"\N{EULER CONSTANT}")
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
ValueError: not a numeric character
>>> unicodedata.numeric(u"\N{AIRSPEED OF AFRICAN SWALLOW}")
UnicodeError: Unicode-Escape decoding error: Invalid Unicode Character Name
>>>





From mal at lemburg.com  Fri Sep 29 00:30:03 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 29 Sep 2000 00:30:03 +0200
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating 
 pointquestion...)
References: <Pine.LNX.4.10.10009281534010.5685-100000@server1.lfw.org>
Message-ID: <39D3C66B.3A3350AE@lemburg.com>

Fredrik Lundh wrote:
> 
> tim wrote:
> > > unicodedata.rational might be an obvious choice.
> > >
> > >     >>> unicodedata.rational(u"\N{VULGAR FRACTION ONE THIRD}")
> > >     (1, 3)
> >
> > Perfect -- another great name.  Beats all heck out of unicodedata.vulgar()
> > too.
> 
> should I interpret this as a +1, or should I write a PEP on
> this topic? ;-)

+1 from here. 

I really only chose floats to get all possibilities (digit, decimal
and fractions) into one type... Python should support rational numbers
some day.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From tim_one at email.msn.com  Fri Sep 29 00:32:50 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 28 Sep 2000 18:32:50 -0400
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point question...)
In-Reply-To: <002a01c02996$9b1742c0$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCOELNHIAA.tim_one@email.msn.com>

[The Ping of Death suggests unicodedata.rational]
>     >>> unicodedata.rational(u"\N{VULGAR FRACTION ONE THIRD}")
>     (1, 3)

[Timmy replies]
> Perfect -- another great name.  Beats all heck out of
> unicodedata.vulgar() too.

[/F inquires]
> should I interpret this as a +1, or should I write a PEP on
> this topic? ;-)

I'm on vacation (but too ill to do much besides alternate sleep & email
<snarl>), and I'm not sure we have clear rules about how votes from
commercial Python developers count when made on their own time.  Perhaps a
meta-PEP first to resolve that issue?

Oh, all right, just speaking for myself, I'm +1 on The Ping of Death's name
suggestion provided this function is needed at all.  But not being a Unicode
Guy by nature, I have no opinion on whether the function *is* needed (I
understand how digits work in American English, and ord(ch)-ord('0') is the
limit of my experience; can't say whether even the current .numeric() is
useful for Klingons or Lawyers or whoever it is who expects to get a numeric
value out of a character for 1/2 or 1/3).





From mal at lemburg.com  Fri Sep 29 00:33:50 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 29 Sep 2000 00:33:50 +0200
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point 
 question...)
References: <LNBBLJKPBEHFEDALKOLCCELNHIAA.tim_one@email.msn.com>
Message-ID: <39D3C74E.B1952909@lemburg.com>

Tim Peters wrote:
> 
> [tim]
> > leaving-it-up-to-/f-to-decide-what-.rational()-should-return-for-pi-
> >     ly y'ts  - the timmy of death
> 
> [/F]
> > oh, the unicode folks have figured that one out:
> >
> > >>> unicodedata.numeric(u"\N{GREEK PI SYMBOL}")
> > Traceback (most recent call last):
> >   File "<stdin>", line 1, in ?
> > ValueError: not a numeric character
> 
> Ya, except I'm starting to suspect they're not floating-point experts
> either:
> 
> >>> unicodedata.numeric(u"\N{PLANCK CONSTANT OVER TWO PI}")
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> ValueError: not a numeric character
> >>> unicodedata.numeric(u"\N{EULER CONSTANT}")
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> ValueError: not a numeric character
> >>> unicodedata.numeric(u"\N{AIRSPEED OF AFRICAN SWALLOW}")
> UnicodeError: Unicode-Escape decoding error: Invalid Unicode Character Name
> >>>

Perhaps you should submit these for Unicode 4.0 ;-)

But really, I don't suspect that anyone is going to do serious
character to number conversion on these esoteric characters. Plain
old digits will do just as they always have (or does anyone know
of ways to represent irrational numbers on PCs by other means than
an algorithm which spits out new digits every now and then ?).

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Fri Sep 29 00:38:47 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 29 Sep 2000 00:38:47 +0200
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point 
 question...)
References: <LNBBLJKPBEHFEDALKOLCOELNHIAA.tim_one@email.msn.com>
Message-ID: <39D3C877.BDBC52DF@lemburg.com>

Tim Peters wrote:
> 
> [The Ping of Death suggests unicodedata.rational]
> >     >>> unicodedata.rational(u"\N{VULGAR FRACTION ONE THIRD}")
> >     (1, 3)
> 
> [Timmy replies]
> > Perfect -- another great name.  Beats all heck out of
> > unicodedata.vulgar() too.
> 
> [/F inquires]
> > should I interpret this as a +1, or should I write a PEP on
> > this topic? ;-)
> 
> I'm on vacation (but too ill to do much besides alternate sleep & email
> <snarl>), and I'm not sure we have clear rules about how votes from
> commercial Python developers count when made on their own time.  Perhaps a
> meta-PEP first to resolve that issue?
> 
> Oh, all right, just speaking for myself, I'm +1 on The Ping of Death's name
> suggestion provided this function is needed at all.  But not being a Unicode
> Guy by nature, I have no opinion on whether the function *is* needed (I
> understand how digits work in American English, and ord(ch)-ord('0') is the
> limit of my experience; can't say whether even the current .numeric() is
> useful for Klingons or Lawyers or whoever it is who expects to get a numeric
> value out of a character for 1/2 or 1/3).

The reason for "numeric" being available at all is that the
UnicodeData.txt file format specifies such a field. I don't believe
anyone will make serious use of it though... e.g. 2? would parse as 22
and not evaluate to 4.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From tim_one at email.msn.com  Fri Sep 29 00:48:08 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 28 Sep 2000 18:48:08 -0400
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point  question...)
In-Reply-To: <39D3C74E.B1952909@lemburg.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGELPHIAA.tim_one@email.msn.com>

[Tim]
> >>> unicodedata.numeric(u"\N{PLANCK CONSTANT OVER TWO PI}")
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> ValueError: not a numeric character
> >>> unicodedata.numeric(u"\N{EULER CONSTANT}")
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> ValueError: not a numeric character
> >>> unicodedata.numeric(u"\N{AIRSPEED OF AFRICAN SWALLOW}")
> UnicodeError: Unicode-Escape decoding error: Invalid Unicode
                Character Name

[MAL]
> Perhaps you should submit these for Unicode 4.0 ;-)

Note that the first two are already there; they just don't have an
associated numerical value.  The last one was a hint that I was trying to
write a frivolous msg while giving my "<wink>" key a break <wink>.

> But really, I don't suspect that anyone is going to do serious
> character to number conversion on these esoteric characters. Plain
> old digits will do just as they always have ...

Which is why I have to wonder whether there's *any* value in exposing the
numeric-value property beyond regular old digits.





From MarkH at ActiveState.com  Fri Sep 29 03:36:11 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Fri, 29 Sep 2000 12:36:11 +1100
Subject: [Python-Dev] Patch to avoid conflict with older versions of Python.
Message-ID: <ECEPKNMJLHAPFFJHDOJBEEIHDLAA.MarkH@ActiveState.com>

Hi all,
	I'd like some feedback on a patch assigned to me.  It is designed to
prevent Python extensions built for an earlier version of Python from
crashing the new version.

I haven't actually tested the patch, but I am sure it works as advertised
(who is db31 anyway?).

My question relates more to the "style" - the patch locates the new .pyd's
address in memory, and parses through the MS PE/COFF format, locating the
import table.  If then scans the import table looking for Pythonxx.dll, and
compares any found entries with the current version.

Quite clever - a definite plus is that is should work for all old and
future versions (of Python - dunno about Windows ;-) - but do we want this
sort of code in Python?  Is this sort of hack, however clever, going to
some back and bite us?

Second related question:  if people like it, is this feature something we
can squeeze in for 2.0?

If there are no objections to any of this, I am happy to test it and check
it in - but am not confident of doing so without some feedback.

Thanks,

Mark.




From MarkH at ActiveState.com  Fri Sep 29 03:42:01 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Fri, 29 Sep 2000 12:42:01 +1100
Subject: [Python-Dev] Patch to avoid conflict with older versions of Python.
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBEEIHDLAA.MarkH@ActiveState.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBAEIIDLAA.MarkH@ActiveState.com>

> Hi all,
> 	I'd like some feedback on a patch assigned to me.

sorry -
http://sourceforge.net/patch/?func=detailpatch&patch_id=101676&group_id=547
0

Mark.




From tim_one at email.msn.com  Fri Sep 29 04:24:24 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 28 Sep 2000 22:24:24 -0400
Subject: [Python-Dev] .pyc broken on Windows -- anywhere else?
Message-ID: <LNBBLJKPBEHFEDALKOLCEEMHHIAA.tim_one@email.msn.com>

This is from 2.0b2 Windows, and typical:

C:\Python20>python -v
# C:\PYTHON20\lib\site.pyc has bad magic
import site # from C:\PYTHON20\lib\site.py
# wrote C:\PYTHON20\lib\site.pyc
# C:\PYTHON20\lib\os.pyc has bad magic
import os # from C:\PYTHON20\lib\os.py
# wrote C:\PYTHON20\lib\os.pyc
import nt # builtin
# C:\PYTHON20\lib\ntpath.pyc has bad magic
import ntpath # from C:\PYTHON20\lib\ntpath.py
# wrote C:\PYTHON20\lib\ntpath.pyc
# C:\PYTHON20\lib\stat.pyc has bad magic
import stat # from C:\PYTHON20\lib\stat.py
# wrote C:\PYTHON20\lib\stat.pyc
# C:\PYTHON20\lib\string.pyc has bad magic
import string # from C:\PYTHON20\lib\string.py
# wrote C:\PYTHON20\lib\string.pyc
import strop # builtin
# C:\PYTHON20\lib\UserDict.pyc has bad magic
import UserDict # from C:\PYTHON20\lib\UserDict.py
# wrote C:\PYTHON20\lib\UserDict.pyc
Python 2.0b2 (#6, Sep 26 2000, 14:59:21) [MSC 32 bit (Intel)] on win32
Type "copyright", "credits" or "license" for more information.
>>>

That is, .pyc's don't work at all anymore on Windows:  Python *always*
thinks they have a bad magic number.  Elsewhere?

Also noticed that test_popen2 got broken on Windows after 2.0b2, for a very
weird reason:

C:\Code\python\dist\src\PCbuild>python ../lib/test/test_popen2.py
Test popen2 module:
testing popen2...
testing popen3...
Traceback (most recent call last):
  File "../lib/test/test_popen2.py", line 64, in ?
    main()
  File "../lib/test/test_popen2.py", line 23, in main
    popen2._test()
  File "c:\code\python\dist\src\lib\popen2.py", line 188, in _test
    for inst in _active[:]:
NameError: There is no variable named '_active'

C:\Code\python\dist\src\PCbuild>

C:\Code\python\dist\src\PCbuild>python ../lib/popen2.py
testing popen2...
testing popen3...
Traceback (most recent call last):
  File "../lib/popen2.py", line 195, in ?
    _test()
  File "../lib/popen2.py", line 188, in _test
    for inst in _active[:]:
NameError: There is no variable named '_active'

C:\Code\python\dist\src\PCbuild>

Ah!  That's probably because of this clever new code:

if sys.platform[:3] == "win":
    # Some things don't make sense on non-Unix platforms.
    del Popen3, Popen4, _active, _cleanup

If I weren't on vacation, I'd check in a fix <wink>.





From fdrake at beopen.com  Fri Sep 29 04:25:00 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 28 Sep 2000 22:25:00 -0400 (EDT)
Subject: [Python-Dev] Re: WHOA!!! Screw up on my part: how do I undo this (Re: [Python-checkins] CVS: black - Imported sources)
In-Reply-To: <20000927003233.C19872@ActiveState.com>
References: <200009270706.AAA21107@slayer.i.sourceforge.net>
	<20000927003233.C19872@ActiveState.com>
Message-ID: <14803.64892.937014.475312@cj42289-a.reston1.va.home.com>

Trent Mick writes:
 > I was playing with a different SourceForge project and I screwed up my
 > CVSROOT (used Python's instead). Sorry SOrry!

  Well, you blew it.  Don't worry, we'll have you kicked off
SourceForge in no time!  ;)
  Well, maybe not.  I've submitted a support request to fix this:

http://sourceforge.net/support/?func=detailsupport&support_id=106112&group_id=1


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From m.favas at per.dem.csiro.au  Fri Sep 29 04:49:54 2000
From: m.favas at per.dem.csiro.au (Mark Favas)
Date: Fri, 29 Sep 2000 10:49:54 +0800
Subject: [Python-Dev] .pyc broken on Windows -- anywhere else?
Message-ID: <39D40352.5C511629@per.dem.csiro.au>

Tim writes:
That is, .pyc's don't work at all anymore on Windows:  Python *always*
thinks they have a bad magic number.  Elsewhere?

Just grabbed the latest from CVS - .pyc is still fine on Tru64 Unix...

Mark
-- 
Email - m.favas at per.dem.csiro.au       Postal - Mark C Favas
Phone - +61 8 9333 6268, 041 892 6074           CSIRO Exploration &
Mining
Fax   - +61 8 9387 8642                         Private Bag No 5
                                                Wembley, Western
Australia 6913



From nhodgson at bigpond.net.au  Fri Sep 29 05:58:41 2000
From: nhodgson at bigpond.net.au (Neil Hodgson)
Date: Fri, 29 Sep 2000 13:58:41 +1000
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point  question...)
References: <LNBBLJKPBEHFEDALKOLCGELPHIAA.tim_one@email.msn.com>
Message-ID: <045201c029c9$8f49fd10$8119fea9@neil>

[Tim]
> Which is why I have to wonder whether there's *any* value in exposing the
> numeric-value property beyond regular old digits.

   Running (in IDLE or PythonWin with a font that covers most of Unicode
like Tahoma):
import unicodedata
for c in range(0x10000):
 x=unichr(c)
 try:
    b = unicodedata.numeric(x)
    #print "numeric:", repr(x)
    try:
      a = unicodedata.digit(x)
      if a != b:
       print "bad" , repr(x)
    except:
      print "Numeric but not digit", hex(c), x.encode("utf8"), "numeric ->",
b
 except:
  pass

   Finds about 130 characters. The only ones I feel are worth worrying about
are the half, quarters and eighths (0xbc, 0xbd, 0xbe, 0x215b, 0x215c,
0x215d, 0x215e) which are commonly used for expressing the prices of stocks
and commodities in the US. This may be rarely used but it is better to have
it available than to have people coding up their own translation tables.

   The 0x302* 'Hangzhou' numerals look like they should be classified as
digits.

   Neil





From tim_one at email.msn.com  Fri Sep 29 05:27:55 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 28 Sep 2000 23:27:55 -0400
Subject: [Python-Dev] .pyc broken on Windows -- anywhere else?
In-Reply-To: <39D40352.5C511629@per.dem.csiro.au>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEMJHIAA.tim_one@email.msn.com>

[Tim]
> That is, .pyc's don't work at all anymore on Windows:  Python *always*
> thinks they have a bad magic number.  Elsewhere?

[Mark Favas]
> Just grabbed the latest from CVS - .pyc is still fine on Tru64 Unix...

Good clue!  Looks like Guido broke this on Windows when adding some
"exclusive write" silliness <wink> for Unixoids.  I'll try to make time
tonight to understand it (*looks* like fdopen is too late to ask for binary
mode under Windows ...).





From tim_one at email.msn.com  Fri Sep 29 05:40:49 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 28 Sep 2000 23:40:49 -0400
Subject: [Python-Dev] .pyc broken on Windows -- anywhere else?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOEMJHIAA.tim_one@email.msn.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEMLHIAA.tim_one@email.msn.com>

Any Unix geek awake?  import.c has this, starting at line 640:

#if defined(O_EXCL)&&defined(O_CREAT)&&defined(O_WRONLY)&&defined(O_TRUNC)
...
	fd = open(filename, O_EXCL|O_CREAT|O_WRONLY|O_TRUNC, 0666);

I need to add O_BINARY to this soup to fix .pyc's under Windows.  Is
O_BINARY customarily defined on Unices?  I realize Unices don't *need* it,
the question is whether it will break Unices if it's there ...





From esr at thyrsus.com  Fri Sep 29 05:59:12 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Thu, 28 Sep 2000 23:59:12 -0400
Subject: [Python-Dev] .pyc broken on Windows -- anywhere else?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKEMLHIAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Thu, Sep 28, 2000 at 11:40:49PM -0400
References: <LNBBLJKPBEHFEDALKOLCOEMJHIAA.tim_one@email.msn.com> <LNBBLJKPBEHFEDALKOLCKEMLHIAA.tim_one@email.msn.com>
Message-ID: <20000928235912.A9339@thyrsus.com>

Tim Peters <tim_one at email.msn.com>:
> Any Unix geek awake?  import.c has this, starting at line 640:
> 
> #if defined(O_EXCL)&&defined(O_CREAT)&&defined(O_WRONLY)&&defined(O_TRUNC)
> ...
> 	fd = open(filename, O_EXCL|O_CREAT|O_WRONLY|O_TRUNC, 0666);
> 
> I need to add O_BINARY to this soup to fix .pyc's under Windows.  Is
> O_BINARY customarily defined on Unices?  I realize Unices don't *need* it,
> the question is whether it will break Unices if it's there ...

It will.  In particular, there us no such flag on Linux.  However
the workaround is trivial:

1. Make your flagargument O_EXCL|O_CREAT|O_WRONLY|O_TRUNC|O_BINARY

2. Above it somewhere, write

#ifndef O_BINARY
#define O_BINARY	0
#endif

Quite painless.
-- 
		<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>

Society in every state is a blessing, but government even in its best
state is but a necessary evil; in its worst state an intolerable one;
for when we suffer, or are exposed to the same miseries *by a
government*, which we might expect in a country *without government*,
our calamities is heightened by reflecting that we furnish the means
by which we suffer."
	-- Thomas Paine



From tim_one at email.msn.com  Fri Sep 29 05:47:55 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 28 Sep 2000 23:47:55 -0400
Subject: [Python-Dev] .pyc broken on Windows -- anywhere else?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKEMLHIAA.tim_one@email.msn.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEMMHIAA.tim_one@email.msn.com>

Nevermind.  Fixed it in a way that will be safe everywhere.

> -----Original Message-----
> From: python-dev-admin at python.org [mailto:python-dev-admin at python.org]On
> Behalf Of Tim Peters
> Sent: Thursday, September 28, 2000 11:41 PM
> To: Mark Favas; python-dev at python.org
> Subject: RE: [Python-Dev] .pyc broken on Windows -- anywhere else?
>
>
> Any Unix geek awake?  import.c has this, starting at line 640:
>
> #if defined(O_EXCL)&&defined(O_CREAT)&&defined(O_WRONLY)&&defined(O_TRUNC)
> ...
> 	fd = open(filename, O_EXCL|O_CREAT|O_WRONLY|O_TRUNC, 0666);
>
> I need to add O_BINARY to this soup to fix .pyc's under Windows.  Is
> O_BINARY customarily defined on Unices?  I realize Unices don't *need* it,
> the question is whether it will break Unices if it's there ...





From fdrake at beopen.com  Fri Sep 29 05:48:49 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 28 Sep 2000 23:48:49 -0400 (EDT)
Subject: [Python-Dev] .pyc broken on Windows -- anywhere else?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKEMLHIAA.tim_one@email.msn.com>
References: <LNBBLJKPBEHFEDALKOLCOEMJHIAA.tim_one@email.msn.com>
	<LNBBLJKPBEHFEDALKOLCKEMLHIAA.tim_one@email.msn.com>
Message-ID: <14804.4385.22560.522921@cj42289-a.reston1.va.home.com>

Tim Peters writes:
 > Any Unix geek awake?  import.c has this, starting at line 640:

  Probably quite a few!

 > #if defined(O_EXCL)&&defined(O_CREAT)&&defined(O_WRONLY)&&defined(O_TRUNC)
 > ...
 > 	fd = open(filename, O_EXCL|O_CREAT|O_WRONLY|O_TRUNC, 0666);
 > 
 > I need to add O_BINARY to this soup to fix .pyc's under Windows.  Is
 > O_BINARY customarily defined on Unices?  I realize Unices don't *need* it,
 > the question is whether it will break Unices if it's there ...

  I think it varies substantially.  I just checked on a FreeBSD
machine in /use/include/*.h and /usr/include/*/*.h, and grep said it
wasn't there.  It is defined on my Linux box, however.
  Since O_BINARY is a no-op for Unix, you can do this:

#if defined(O_EXCL)&&defined(O_CREAT)&&defined(O_WRONLY)&&defined(O_TRUNC)
#ifndef O_BINARY
#define O_BINARY (0)
#endif
...
	fd = open(filename, O_EXCL|O_CREAT|O_WRONLY|O_TRUNC, 0666);


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From fdrake at beopen.com  Fri Sep 29 05:51:44 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 28 Sep 2000 23:51:44 -0400 (EDT)
Subject: [Python-Dev] .pyc broken on Windows -- anywhere else?
In-Reply-To: <20000928235912.A9339@thyrsus.com>
References: <LNBBLJKPBEHFEDALKOLCOEMJHIAA.tim_one@email.msn.com>
	<LNBBLJKPBEHFEDALKOLCKEMLHIAA.tim_one@email.msn.com>
	<20000928235912.A9339@thyrsus.com>
Message-ID: <14804.4560.644795.806373@cj42289-a.reston1.va.home.com>

Eric S. Raymond writes:
 > It will.  In particular, there us no such flag on Linux.  However
 > the workaround is trivial:

  Ah, looking back at my grep output, I see that it's defined by a lot
of libraries, but not the standard headers.  It *is* defined by the
Apache API headers, kpathsea, MySQL, OpenSSL, and Qt.  And that's just
from what I have installed.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From bwarsaw at beopen.com  Fri Sep 29 08:06:33 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 29 Sep 2000 02:06:33 -0400 (EDT)
Subject: [Python-Dev] Re: WHOA!!! Screw up on my part: how do I undo this (Re: [Python-checkins] CVS: black - Imported sources)
References: <200009270706.AAA21107@slayer.i.sourceforge.net>
	<20000927003233.C19872@ActiveState.com>
Message-ID: <14804.12649.504962.985774@anthem.concentric.net>

>>>>> "TM" == Trent Mick <trentm at ActiveState.com> writes:

    TM> I was playing with a different SourceForge project and I
    TM> screwed up my CVSROOT (used Python's instead). Sorry SOrry!

    TM> How do I undo this cleanly? I could 'cvs remove' the
    TM> README.txt file but that would still leave the top-level
    TM> 'black/' turd right? Do the SourceForge admin guys have to
    TM> manually kill the 'black' directory in the repository?

One a directory's been added, it's nearly impossible to cleanly delete
it from CVS.  If it's infected people's working directories, you're
really screwed, because even if the SF admins remove it from the
repository, it'll be a pain to clean up on the client side.

Probably best thing to do is make sure you "cvs rm" everything in the
directory and then just let "cvs up -P" remove the empty directory.
Everybody /is/ using -P (and -d) right? :)

-Barry



From effbot at telia.com  Fri Sep 29 09:01:37 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Fri, 29 Sep 2000 09:01:37 +0200
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point  question...)
References: <LNBBLJKPBEHFEDALKOLCGELPHIAA.tim_one@email.msn.com>
Message-ID: <007301c029e3$612e1960$766940d5@hagrid>

tim wrote:
> > But really, I don't suspect that anyone is going to do serious
> > character to number conversion on these esoteric characters. Plain
> > old digits will do just as they always have ...
> 
> Which is why I have to wonder whether there's *any* value in exposing the
> numeric-value property beyond regular old digits.

the unicode database has three fields dealing with the numeric
value: decimal digit value (integer), digit value (integer), and
numeric value (integer *or* rational):

    "This is a numeric field. If the character has the numeric
    property, as specified in Chapter 4 of the Unicode Standard,
    the value of that character is represented with an integer or
    rational number in this field."

here's today's proposal: let's claim that it's a bug to return a float
from "numeric", and change it to return a string instead.

(this will match "decomposition", which is also "broken" -- it really
should return a tag followed by a sequence of unicode characters).

</F>




From martin at loewis.home.cs.tu-berlin.de  Fri Sep 29 09:01:19 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Fri, 29 Sep 2000 09:01:19 +0200
Subject: [Python-Dev] Python-Dev] Patch to avoid conflict with older versions of Python.
Message-ID: <200009290701.JAA01119@loewis.home.cs.tu-berlin.de>

> but do we want this sort of code in Python?

Since I proposed a more primitive approach to solve the same problem
(which you had postponed), I'm obviously in favour of that patch.

> Is this sort of hack, however clever, going to some back and bite us?

I can't see why. The code is quite defensive: If the data structures
don't look like what it expects, it gives up and claims it can't find
the version of the python dll used by this module.

So in worst case, we get what we have now.

My only concern is that it assumes the HMODULE is an address which can
be dereferenced. If there was some MS documentation stating that this
is guaranteed in Win32, it'd be fine. If it is merely established fact
that all Win32 current implementations implement HMODULE that way, I'd
rather see a __try/__except around that - but that would only add to
the defensive style of this patch.

A hack is required since earlier versions of Python did not consider
this problem. I don't know whether python20.dll will behave reasonably
when loaded into Python 2.1 next year - was there anything done to
address the "uninitialized interpreter" problem?

> if people like it, is this feature something we can squeeze in for
> 2.0?

I think this patch will have most value if applied to 2.0. When 2.1
comes along, many people will have been bitten by this bug, and will
know to avoid it - so it won't do that much good in 2.1.

I'm not looking forward to answering all the help at python.org messages
to explain why Python can't deal with versions properly, so I'd rather
see these people get a nice exception instead of IDLE silently closing
all windows [including those with two hours of unsaved work].

Regards,
Martin

P.S db3l is David Bolen, see http://sourceforge.net/users/db3l.



From tim_one at email.msn.com  Fri Sep 29 09:32:09 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 29 Sep 2000 03:32:09 -0400
Subject: [Python-Dev] Patch to avoid conflict with older versions of Python.
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBEEIHDLAA.MarkH@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCENFHIAA.tim_one@email.msn.com>

[Mark Hammond]
> 	I'd like some feedback on a patch assigned to me.

It's assigned to you only because I'm on vacation now <wink>.

> It is designed to prevent Python extensions built for an earlier
> version of Python from crashing the new version.
>
> I haven't actually tested the patch, but I am sure it works as
> advertised (who is db31 anyway?).

It's sure odd that SF doesn't know!  It's David Bolen; see

http://www.python.org/pipermail/python-list/2000-September/119081.html

> My question relates more to the "style" - the patch locates the new
> .pyd's address in memory, and parses through the MS PE/COFF format,
> locating the import table.  If then scans the import table looking
> for Pythonxx.dll, and compares any found entries with the current
> version.
>
> Quite clever - a definite plus is that is should work for all old and
> future versions (of Python - dunno about Windows ;-) - but do we want
> this sort of code in Python?  Is this sort of hack, however clever,
> going to some back and bite us?

Guido will hate it:  his general rule is that he doesn't want code he
couldn't personally repair if needed, and this code is from Pluto (I hear
that's right next to Redmond, though, so let's not overreact either <wink>).

OTOH, Python goes to extreme lengths to prevent crashes, and my reading of
early c.l.py reports is that the 2.0 DLL incompatibility is going to cause a
lot of crashes out in the field.  People generally don't know squat about
the extension modules they're using -- or sometimes even that they *are*
using some.

> Second related question:  if people like it, is this feature something we
> can squeeze in for 2.0?

Well, it's useless if we don't.  That is, we should bite the bullet and come
up with a principled solution, even if that means extension writers have to
add a few new lines of code or be shunned from the community forever.  But
that won't happen for 2.0.

> If there are no objections to any of this, I am happy to test it and
> check it in - but am not confident of doing so without some feedback.

Guido's out of touch, but I'm on vacation, so he can't yell at me for
encouraging you on my own time.  If it works, I would check it in with the
understanding that we earnestly intend to do whatever it takes to get rid of
this code after 2.0.  It is not a long-term solution, but if it works it's
a very expedient hack.  Hacks suck for us, but letting Python blow up sucks
for users.  So long as I'm on vacation, I side with the users <0.9 wink>.

then-let's-ask-david-to-figure-out-how-to-disable-norton-antivirus-ly
    y'rs  - tim





From thomas.heller at ion-tof.com  Fri Sep 29 09:36:33 2000
From: thomas.heller at ion-tof.com (Thomas Heller)
Date: Fri, 29 Sep 2000 09:36:33 +0200
Subject: [Python-Dev] Patch to avoid conflict with older versions of Python.
References: <ECEPKNMJLHAPFFJHDOJBEEIHDLAA.MarkH@ActiveState.com>
Message-ID: <007d01c029e8$00b33570$4500a8c0@thomasnb>

> Hi all,
> I'd like some feedback on a patch assigned to me.  It is designed to
> prevent Python extensions built for an earlier version of Python from
> crashing the new version.
>
> I haven't actually tested the patch, but I am sure it works as advertised
> (who is db31 anyway?).
>
> My question relates more to the "style" - the patch locates the new .pyd's
> address in memory, and parses through the MS PE/COFF format, locating the
> import table.  It then scans the import table looking for Pythonxx.dll, and
> compares any found entries with the current version.
Shouldn't the win32 API BindImageEx be used? Then you would not have
to know about the PE/COFF format at all. You can install a callback
function which will be called with the dll-names bound.
According to my docs, BindImageEx may not be included in early versions of
Win95, but who is using that anyway?
(Well, ok, what about CE?)

>
> Quite clever - a definite plus is that it should work for all old and
> future versions (of Python - dunno about Windows ;-) - but do we want this
> sort of code in Python?  Is this sort of hack, however clever, going to
> come back and bite us?
>
> Second related question:  if people like it, is this feature something we
> can squeeze in for 2.0?
+1 from me (if I count).

>
> If there are no objections to any of this, I am happy to test it and check
> it in - but am not confident of doing so without some feedback.
>
> Thanks,
>
> Mark.

Thomas




From effbot at telia.com  Fri Sep 29 09:53:57 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Fri, 29 Sep 2000 09:53:57 +0200
Subject: [Python-Dev] Patch to avoid conflict with older versions of Python.
References: <ECEPKNMJLHAPFFJHDOJBEEIHDLAA.MarkH@ActiveState.com> <007d01c029e8$00b33570$4500a8c0@thomasnb>
Message-ID: <012401c029ea$6cfbc7e0$766940d5@hagrid>

> According to my docs, BindImageEx may not be included in early versions of
> Win95, but who is using that anyway?

lots of people -- the first version of our PythonWare
installer didn't run on the original Win95 release, and
we still get complaints about that.

on the other hand, it's not that hard to use BindImageEx
only if it exists...

</F>




From mal at lemburg.com  Fri Sep 29 09:54:16 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 29 Sep 2000 09:54:16 +0200
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point  
 question...)
References: <LNBBLJKPBEHFEDALKOLCGELPHIAA.tim_one@email.msn.com>
Message-ID: <39D44AA8.926DCF04@lemburg.com>

Tim Peters wrote:
> 
> [Tim]
> > >>> unicodedata.numeric(u"\N{PLANCK CONSTANT OVER TWO PI}")
> > Traceback (most recent call last):
> >   File "<stdin>", line 1, in ?
> > ValueError: not a numeric character
> > >>> unicodedata.numeric(u"\N{EULER CONSTANT}")
> > Traceback (most recent call last):
> >   File "<stdin>", line 1, in ?
> > ValueError: not a numeric character
> > >>> unicodedata.numeric(u"\N{AIRSPEED OF AFRICAN SWALLOW}")
> > UnicodeError: Unicode-Escape decoding error: Invalid Unicode
>                 Character Name
> 
> [MAL]
> > Perhaps you should submit these for Unicode 4.0 ;-)
> 
> Note that the first two are already there; they just don't have an
> associated numerical value.  The last one was a hint that I was trying to
> write a frivolous msg while giving my "<wink>" key a break <wink>.

That's what I meant: you should submit the numeric values for
the first two and opt for addition of the last.
 
> > But really, I don't suspect that anyone is going to do serious
> > character to number conversion on these esoteric characters. Plain
> > old digits will do just as they always have ...
> 
> Which is why I have to wonder whether there's *any* value in exposing the
> numeric-value property beyond regular old digits.

It is needed for Unicode 3.0 standard compliance and for whoever
wants to use this data. Since the Unicode database explicitly
contains fractions, I think adding the .rational() API would
make sense to provide a different access method to this data.
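For concreteness, here is how the three numeric properties behave, shown
with the unicodedata module as it exists in modern Python (a sketch, not
the 2.0-era module under discussion):

```python
import unicodedata

# Plain digits carry all three numeric properties.
assert unicodedata.decimal("7") == 7
assert unicodedata.digit("7") == 7
assert unicodedata.numeric("7") == 7.0

# VULGAR FRACTION ONE HALF (U+00BD) has only the numeric property,
# exposed as a float -- this is the fraction data MAL refers to.
assert unicodedata.numeric("\N{VULGAR FRACTION ONE HALF}") == 0.5

# Characters with no numeric value raise ValueError, as in Tim's examples.
try:
    unicodedata.numeric("h")
except ValueError:
    pass
```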

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Fri Sep 29 10:01:57 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 29 Sep 2000 10:01:57 +0200
Subject: [Python-Dev] .pyc broken on Windows -- anywhere else?
References: <LNBBLJKPBEHFEDALKOLCEEMHHIAA.tim_one@email.msn.com>
Message-ID: <39D44C75.110D83B6@lemburg.com>

Tim Peters wrote:
> 
> This is from 2.0b2 Windows, and typical:
> 
> C:\Python20>python -v
> # C:\PYTHON20\lib\site.pyc has bad magic
> import site # from C:\PYTHON20\lib\site.py
> # wrote C:\PYTHON20\lib\site.pyc
> # C:\PYTHON20\lib\os.pyc has bad magic
> import os # from C:\PYTHON20\lib\os.py
> # wrote C:\PYTHON20\lib\os.pyc
> import nt # builtin
> # C:\PYTHON20\lib\ntpath.pyc has bad magic
> import ntpath # from C:\PYTHON20\lib\ntpath.py
> # wrote C:\PYTHON20\lib\ntpath.pyc
> # C:\PYTHON20\lib\stat.pyc has bad magic
> import stat # from C:\PYTHON20\lib\stat.py
> # wrote C:\PYTHON20\lib\stat.pyc
> # C:\PYTHON20\lib\string.pyc has bad magic
> import string # from C:\PYTHON20\lib\string.py
> # wrote C:\PYTHON20\lib\string.pyc
> import strop # builtin
> # C:\PYTHON20\lib\UserDict.pyc has bad magic
> import UserDict # from C:\PYTHON20\lib\UserDict.py
> # wrote C:\PYTHON20\lib\UserDict.pyc
> Python 2.0b2 (#6, Sep 26 2000, 14:59:21) [MSC 32 bit (Intel)] on win32
> Type "copyright", "credits" or "license" for more information.
> >>>
> 
> That is, .pyc's don't work at all anymore on Windows:  Python *always*
> thinks they have a bad magic number.  Elsewhere?

FYI, it works just fine on Linux on i586.

--
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Fri Sep 29 10:13:34 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 29 Sep 2000 10:13:34 +0200
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point  
 question...)
References: <LNBBLJKPBEHFEDALKOLCGELPHIAA.tim_one@email.msn.com> <007301c029e3$612e1960$766940d5@hagrid>
Message-ID: <39D44F2E.14701980@lemburg.com>

Fredrik Lundh wrote:
> 
> tim wrote:
> > > But really, I don't suspect that anyone is going to do serious
> > > character to number conversion on these esoteric characters. Plain
> > > old digits will do just as they always have ...
> >
> > Which is why I have to wonder whether there's *any* value in exposing the
> > numeric-value property beyond regular old digits.
> 
> the unicode database has three fields dealing with the numeric
> value: decimal digit value (integer), digit value (integer), and
> numeric value (integer *or* rational):
> 
>     "This is a numeric field. If the character has the numeric
>     property, as specified in Chapter 4 of the Unicode Standard,
>     the value of that character is represented with an integer or
>     rational number in this field."
> 
> here's today's proposal: let's claim that it's a bug to return a float
> from "numeric", and change it to return a string instead.

Hmm, how about making the return format an option ?

unicodedata.numeric(char, format=('float' (default), 'string', 'fraction'))
 
> (this will match "decomposition", which is also "broken" -- it really
> should return a tag followed by a sequence of unicode characters).

Same here:

unicodedata.decomposition(char, format=('string' (default), 
                                        'tuple'))

I'd opt for making the API more customizable rather than trying
to find the one and only true return format ;-)
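The case for a string (or fraction) return format is easiest to see with a
third: 1/3 has no exact float representation, so a float-only numeric() is
necessarily lossy. A modern-Python sketch (the fractions module did not
exist in 2000 and is used here purely for illustration):

```python
import unicodedata
from fractions import Fraction

third = "\N{VULGAR FRACTION ONE THIRD}"  # U+2153

# The float result only approximates 1/3 ...
value = unicodedata.numeric(third)
assert value != Fraction(1, 3)

# ... whereas a string form such as "1/3" could round-trip exactly.
assert Fraction("1/3") == Fraction(1, 3)
```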

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From thomas.heller at ion-tof.com  Fri Sep 29 10:48:51 2000
From: thomas.heller at ion-tof.com (Thomas Heller)
Date: Fri, 29 Sep 2000 10:48:51 +0200
Subject: [Python-Dev] Patch to avoid conflict with older versions of Python.
References: <ECEPKNMJLHAPFFJHDOJBEEIHDLAA.MarkH@ActiveState.com> <007d01c029e8$00b33570$4500a8c0@thomasnb> <012401c029ea$6cfbc7e0$766940d5@hagrid>
Message-ID: <001601c029f2$1aa72540$4500a8c0@thomasnb>

> > According to my docs, BindImageEx may not be included in early versions of
> > Win95, but who is using that anyway?
>
> lots of people -- the first version of our PythonWare
> installer didn't run on the original Win95 release, and
> we still get complaints about that.
>

Requirements
  Windows NT/2000: Requires Windows NT 4.0 or later.
  Windows 95/98: Requires Windows 95 or later. Available as a
redistributable for Windows 95.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  Header: Declared in Imagehlp.h.
  Library: Use Imagehlp.lib.

> on the other hand, it's not that hard to use BindImageEx
> only if it exists...
>

Thomas




From tim_one at email.msn.com  Fri Sep 29 11:02:38 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 29 Sep 2000 05:02:38 -0400
Subject: [Python-Dev] Patch to avoid conflict with older versions of Python.
In-Reply-To: <012401c029ea$6cfbc7e0$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCCENKHIAA.tim_one@email.msn.com>

[Thomas Heller]
> According to my docs, BindImageEx may not be included in early
> versions of Win95, but who is using that anyway?

[/F]
> lots of people -- the first version of our PythonWare
> installer didn't run on the original Win95 release, and
> we still get complaints about that.

Indeed, you got one from me <wink>!

> on the other hand, it's not that hard to use BindImageEx
> only if it exists...

I'm *really* going on vacation now, but if BindImageEx makes sense here
(offhand I confess the intended use of it here didn't click for me), MS's
imagehlp.dll is redistributable -- although it appears they split it into
two DLLs for Win2K and made only "the other one" redistributable there
<arghghghgh> ...





From thomas.heller at ion-tof.com  Fri Sep 29 11:15:27 2000
From: thomas.heller at ion-tof.com (Thomas Heller)
Date: Fri, 29 Sep 2000 11:15:27 +0200
Subject: [Python-Dev] Patch to avoid conflict with older versions of Python.
References: <LNBBLJKPBEHFEDALKOLCCENKHIAA.tim_one@email.msn.com>
Message-ID: <002e01c029f5$d24dbc10$4500a8c0@thomasnb>

> I'm *really* going on vacation now, but if BindImageEx makes sense here
> (offhand I confess the intended use of it here didn't click for me), MS's
> imagehlp.dll is redistributable -- although it appears they split it into
> two DLLs for Win2K and made only "the other one" redistributable there
> <arghghghgh> ...

No need to install it on Win2K (may not even be possible?),
only for Win95.

I just checked: imagehlp.dll is NOT included in Win95b (which I still
use on one computer, but I thought I was in a small minority).

Thomas




From jeremy at beopen.com  Fri Sep 29 16:09:16 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Fri, 29 Sep 2000 10:09:16 -0400 (EDT)
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point  question...)
In-Reply-To: <045201c029c9$8f49fd10$8119fea9@neil>
References: <LNBBLJKPBEHFEDALKOLCGELPHIAA.tim_one@email.msn.com>
	<045201c029c9$8f49fd10$8119fea9@neil>
Message-ID: <14804.41612.747364.118819@bitdiddle.concentric.net>

>>>>> "NH" == Neil Hodgson <nhodgson at bigpond.net.au> writes:

  NH>    Finds about 130 characters. The only ones I feel are worth
  NH>    worrying about
  NH> are the half, quarters and eighths (0xbc, 0xbd, 0xbe, 0x215b,
  NH> 0x215c, 0x215d, 0x215e) which are commonly used for expressing
  NH> the prices of stocks and commodities in the US. This may be
  NH> rarely used but it is better to have it available than to have
  NH> people coding up their own translation tables.

The US no longer uses fractions to report stock prices.  Example:
    http://business.nytimes.com/market_summary.asp

LEADERS                            Last      Range         Change    
AMERICAN INDL PPTYS REIT  (IND)   14.06  13.56  - 14.06  0.25  / 1.81% 
R G S ENERGY GROUP INC  (RGS)     28.19  27.50  - 28.19  0.50  / 1.81% 
DRESDNER RCM GLBL STRT INC  (DSF)  6.63   6.63  - 6.63   0.06  / 0.95% 
FALCON PRODS INC  (FCP)            9.63   9.63  - 9.88   0.06  / 0.65% 
GENERAL ELEC CO  (GE)             59.00  58.63  - 59.75  0.19  / 0.32% 

Jeremy



From trentm at ActiveState.com  Fri Sep 29 16:56:34 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Fri, 29 Sep 2000 07:56:34 -0700
Subject: [Python-Dev] Re: WHOA!!! Screw up on my part: how do I undo this (Re: [Python-checkins] CVS: black - Imported sources)
In-Reply-To: <14803.64892.937014.475312@cj42289-a.reston1.va.home.com>; from fdrake@beopen.com on Thu, Sep 28, 2000 at 10:25:00PM -0400
References: <200009270706.AAA21107@slayer.i.sourceforge.net> <20000927003233.C19872@ActiveState.com> <14803.64892.937014.475312@cj42289-a.reston1.va.home.com>
Message-ID: <20000929075634.B15762@ActiveState.com>

On Thu, Sep 28, 2000 at 10:25:00PM -0400, Fred L. Drake, Jr. wrote:
> 
> Trent Mick writes:
>  > I was playing with a different SourceForge project and I screwed up my
>  > CVSROOT (used Python's instead). Sorry SOrry!
> 
>   Well, you blew it.  Don't worry, we'll have you kicked off
> SourceForge in no time!  ;)
>   Well, maybe not.  I've submitted a support request to fix this:
> 
> http://sourceforge.net/support/?func=detailsupport&support_id=106112&group_id=1
> 
> 

Thank you Fred!


Trent

-- 
Trent Mick
TrentM at ActiveState.com



From trentm at ActiveState.com  Fri Sep 29 17:00:17 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Fri, 29 Sep 2000 08:00:17 -0700
Subject: [Python-Dev] Re: WHOA!!! Screw up on my part: how do I undo this (Re: [Python-checkins] CVS: black - Imported sources)
In-Reply-To: <14804.12649.504962.985774@anthem.concentric.net>; from bwarsaw@beopen.com on Fri, Sep 29, 2000 at 02:06:33AM -0400
References: <200009270706.AAA21107@slayer.i.sourceforge.net> <20000927003233.C19872@ActiveState.com> <14804.12649.504962.985774@anthem.concentric.net>
Message-ID: <20000929080017.C15762@ActiveState.com>

On Fri, Sep 29, 2000 at 02:06:33AM -0400, Barry A. Warsaw wrote:
> 
> >>>>> "TM" == Trent Mick <trentm at ActiveState.com> writes:
> 
>     TM> I was playing with a different SourceForge project and I
>     TM> screwed up my CVSROOT (used Python's instead). Sorry SOrry!
> 
>     TM> How do I undo this cleanly? I could 'cvs remove' the
>     TM> README.txt file but that would still leave the top-level
>     TM> 'black/' turd right? Do the SourceForge admin guys have to
>     TM> manually kill the 'black' directory in the repository?
> 
Once a directory's been added, it's nearly impossible to cleanly delete
> it from CVS.  If it's infected people's working directories, you're
> really screwed, because even if the SF admins remove it from the
> repository, it'll be a pain to clean up on the client side.

Hopefully no client machines were infected. People would have to 'cvs co
black' with the Python CVSROOT. I presume people are only doing either 'cvs
co python' or 'cvs co distutils'. ...or is there some sort of 'cvs co *' type
invocation that people could and were using?



> 
> Probably best thing to do is make sure you "cvs rm" everything in the
> directory and then just let "cvs up -P" remove the empty directory.
> Everybody /is/ using -P (and -d) right? :)
>

I didn't know about -P, but I will use it now. For reference for others:

       -P     Prune (remove) directories that are empty after being
              updated, on checkout, or update.  Normally, an empty
              directory (one that is void of revision-controlled files)
              is left alone.  Specifying -P will cause these directories
              to be silently removed from your checked-out sources.
              This does not remove the directory from the repository,
              only from your checked out copy.  Note that this option is
              implied by the -r or -D options of checkout and export.


Trent


-- 
Trent Mick
TrentM at ActiveState.com



From bwarsaw at beopen.com  Fri Sep 29 17:12:29 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 29 Sep 2000 11:12:29 -0400 (EDT)
Subject: [Python-Dev] Re: WHOA!!! Screw up on my part: how do I undo this (Re: [Python-checkins] CVS: black - Imported sources)
References: <200009270706.AAA21107@slayer.i.sourceforge.net>
	<20000927003233.C19872@ActiveState.com>
	<14804.12649.504962.985774@anthem.concentric.net>
	<20000929080017.C15762@ActiveState.com>
Message-ID: <14804.45405.528913.613816@anthem.concentric.net>

>>>>> "TM" == Trent Mick <trentm at ActiveState.com> writes:

    TM> Hopefully no client machines were infected. People would have
    TM> to 'cvs co black' with the Python CVSROOT. I presume people
    TM> are only doing either 'cvs co python' or 'cvs co
    TM> distutils'. ...or is there some sort of 'cvs co *' type
    TM> invocation that people could and were using?

In fact, I usually only "co -d python python/dist/src" :)  But if you
do a "cvs up -d" at the top-level, I think you'll get the new
directory.  Don't know how many people that'll affect, but if you're
going to wax that directory, the sooner the better!

-Barry



From fdrake at beopen.com  Fri Sep 29 17:21:48 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Fri, 29 Sep 2000 11:21:48 -0400 (EDT)
Subject: [Python-Dev] Re: WHOA!!! Screw up on my part: how do I undo this (Re: [Python-checkins] CVS: black - Imported sources)
In-Reply-To: <14804.12649.504962.985774@anthem.concentric.net>
References: <200009270706.AAA21107@slayer.i.sourceforge.net>
	<20000927003233.C19872@ActiveState.com>
	<14804.12649.504962.985774@anthem.concentric.net>
Message-ID: <14804.45964.428895.57625@cj42289-a.reston1.va.home.com>

Barry A. Warsaw writes:
 > Once a directory's been added, it's nearly impossible to cleanly delete
 > it from CVS.  If it's infected people's working directories, you're
 > really screwed, because even if the SF admins remove it from the
 > repository, it'll be a pain to clean up on the client side.

  In general, yes, but since the directory was a separate module (in
CVS terms, "product" in SF terms), there's no way for it to have been
picked up by clients automatically.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From fdrake at beopen.com  Fri Sep 29 18:15:09 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Fri, 29 Sep 2000 12:15:09 -0400 (EDT)
Subject: [Python-Dev] codecs question
Message-ID: <14804.49165.894978.144346@cj42289-a.reston1.va.home.com>

  Jeremy was just playing with the xml.sax package, and decided to
print the string returned from parsing "&#169;" (the copyright
symbol).  Sure enough, he got a traceback:

>>> print u'\251'

Traceback (most recent call last):
  File "<stdin>", line 1, in ?
UnicodeError: ASCII encoding error: ordinal not in range(128)

and asked me about it.  I was a little surprised myself.  First, that
anyone would use "print" in a SAX handler to start with, and second,
that it was so painful.
  Now, I can chalk this up to not using a reasonable stdout that
understands that Unicode needs to be translated to Latin-1 given my
font selection.  So I looked at the codecs module to provide a usable
output stream.  The EncodedFile class provides a nice wrapper around
another file object, and supports encoding both ways.
  Unfortunately, I can't see what "encoding" I should use if I want to
read & write Unicode string objects to it.  ;(  (Marc-Andre, please
tell me I've missed something!)  I also don't think I
can use it with "print", extended or otherwise.
  The PRINT_ITEM opcode calls PyFile_WriteObject() with whatever it
gets, so that's fine.  Then it converts the object using
PyObject_Str() or PyObject_Repr().  For Unicode objects, the tp_str
handler attempts conversion to the default encoding ("ascii" in this
case), and raises the traceback we see above.
  Perhaps a little extra work is needed in PyFile_WriteObject() to
allow Unicode objects to pass through if the file is merely file-like,
and let the next layer handle the conversion?  This would probably
break code, and therefore not be acceptable.
  On the other hand, it's annoying that I can't create a file-object
that takes Unicode strings from "print", and doesn't seem intuitive.
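For the record, the codecs machinery can already produce a stream whose
write() accepts Unicode and encodes transparently; a sketch of the wrapper
Fred is after, shown with an in-memory byte stream and Latin-1 chosen
arbitrarily as the target encoding:

```python
import codecs
import io

# Wrap a byte stream so that .write() accepts Unicode strings and
# encodes them on the way out.
raw = io.BytesIO()
writer = codecs.getwriter("latin-1")(raw)

writer.write("\u00a9 2000")  # COPYRIGHT SIGN encodes fine in Latin-1
assert raw.getvalue() == b"\xa9 2000"
```

This does not help with "print", which converts to a byte string before
the stream ever sees the object, as discussed below in the thread.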


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From loewis at informatik.hu-berlin.de  Fri Sep 29 19:16:25 2000
From: loewis at informatik.hu-berlin.de (Martin von Loewis)
Date: Fri, 29 Sep 2000 19:16:25 +0200 (MET DST)
Subject: [Python-Dev] codecs question 
Message-ID: <200009291716.TAA05996@pandora.informatik.hu-berlin.de>

>   Unfortunately, I can't see what "encoding" I should use if I want
>   to read & write Unicode string objects to it.  ;( (Marc-Andre,
>   please tell me I've missed something!)

It depends on the output you want to have. One option would be

s=codecs.lookup('unicode-escape')[3](sys.stdout)

Then, s.write(u'\251') prints a string in Python quoting notation.

Unfortunately,

print >>s,u'\251'

won't work, since print *first* tries to convert the argument to a
string, and then prints the string onto the stream.
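Martin's lookup trick, checked against an in-memory stream (index 3 of the
tuple returned by codecs.lookup() is the StreamWriter class; behaviour
shown as in modern Python):

```python
import codecs
import io

raw = io.BytesIO()
# codecs.lookup() returns (encoder, decoder, streamreader, streamwriter).
s = codecs.lookup("unicode-escape")[3](raw)

s.write("\u00a9")  # written in Python quoting notation
assert raw.getvalue() == b"\\xa9"
```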

>  On the other hand, it's annoying that I can't create a file-object
> that takes Unicode strings from "print", and doesn't seem intuitive.

Since you are asking for a hack :-) How about having an additional
letter of 'u' in the "mode" attribute of a file object?

Then, print would be

def print(stream,string):
  if type(string) == UnicodeType:
    if 'u' in stream.mode:
      stream.write(string)
      return
  stream.write(str(string))

The stream readers and writers would then need to have a mode of 'ru'
or 'wu', respectively.

Any other protocol to signal unicode-awareness in a stream might do as
well.

Regards,
Martin

P.S. Is there some function to retrieve the UCN names from ucnhash.c?



From mal at lemburg.com  Fri Sep 29 20:08:26 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 29 Sep 2000 20:08:26 +0200
Subject: [Python-Dev] codecs question
References: <200009291716.TAA05996@pandora.informatik.hu-berlin.de>
Message-ID: <39D4DA99.53338FA5@lemburg.com>

Martin von Loewis wrote:
> 
> P.S. Is there some function to retrieve the UCN names from ucnhash.c?

No, there's not even a way to extract those names... a table is
there (_Py_UnicodeCharacterName in ucnhash.c), but no access
function.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Fri Sep 29 20:09:13 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 29 Sep 2000 20:09:13 +0200
Subject: [Python-Dev] codecs question
References: <14804.49165.894978.144346@cj42289-a.reston1.va.home.com>
Message-ID: <39D4DAC9.7F8E1CE5@lemburg.com>

"Fred L. Drake, Jr." wrote:
> 
>   Jeremy was just playing with the xml.sax package, and decided to
> print the string returned from parsing "&#169;" (the copyright
> symbol).  Sure enough, he got a traceback:
> 
> >>> print u'\251'
> 
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> UnicodeError: ASCII encoding error: ordinal not in range(128)
> 
> and asked me about it.  I was a little surprised myself.  First, that
> anyone would use "print" in a SAX handler to start with, and second,
> that it was so painful.

That's a consequence of defaulting to ASCII for all platforms
instead of choosing the encoding depending on the current locale
(the site.py file has code which does the latter).

>   Now, I can chalk this up to not using a reasonable stdout that
> understands that Unicode needs to be translated to Latin-1 given my
> font selection.  So I looked at the codecs module to provide a usable
> output stream.  The EncodedFile class provides a nice wrapper around
> another file object, and supports encoding both ways.
>   Unfortunately, I can't see what "encoding" I should use if I want to
> read & write Unicode string objects to it.  ;(  (Marc-Andre, please
> tell me I've missed something!) 

That depends on what you want to see as output ;-) E.g. in
Europe you'd use Latin-1 (which also contains the copyright
symbol).

> I also don't think I
> can use it with "print", extended or otherwise.
>   The PRINT_ITEM opcode calls PyFile_WriteObject() with whatever it
> gets, so that's fine.  Then it converts the object using
> PyObject_Str() or PyObject_Repr().  For Unicode objects, the tp_str
> handler attempts conversion to the default encoding ("ascii" in this
> case), and raises the traceback we see above.

Right.

>   Perhaps a little extra work is needed in PyFile_WriteObject() to
> allow Unicode objects to pass through if the file is merely file-like,
> and let the next layer handle the conversion?  This would probably
> break code, and therefore not be acceptable.
>   On the other hand, it's annoying that I can't create a file-object
> that takes Unicode strings from "print", and doesn't seem intuitive.

The problem is that the .write() method of a file-like object
will most probably only work with string objects. If
it uses "s#" or "t#" it's lucky, because then the argument
parser will apply the necessary magic to the input object
to get out some object ready for writing to the file. Otherwise
it will simply fail with a type error.

Simply allowing PyObject_Str() to return Unicode objects too
is not an alternative either since that would certainly break
tons of code.

Implementing tp_print for Unicode wouldn't get us anything
either.

Perhaps we'll need to fix PyFile_WriteObject() to special
case Unicode and allow calling .write() with a Unicode
object and fix those .write() methods which don't do the
right thing ?!

This is a project for 2.1. In 2.0 only explicitly calling
the .write() method will do the trick and EncodedFile()
helps with this.
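What eventually resolved this, much later, is essentially the
special-casing MAL describes: the I/O layer owns an encoding, so print can
hand it Unicode directly. A sketch of the idea using a text wrapper around
a byte stream (modern Python, shown only to illustrate the design):

```python
import io

raw = io.BytesIO()
# The text layer owns the encoding; callers print Unicode and never
# deal with the byte representation themselves.
out = io.TextIOWrapper(raw, encoding="latin-1")

print("\u00a9 Fred", file=out)
out.flush()
assert raw.getvalue() == b"\xa9 Fred\n"
```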

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From effbot at telia.com  Fri Sep 29 20:28:38 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Fri, 29 Sep 2000 20:28:38 +0200
Subject: [Python-Dev] codecs question 
References: <200009291716.TAA05996@pandora.informatik.hu-berlin.de>
Message-ID: <000001c02a47$f3f5f100$766940d5@hagrid>

> P.S. Is there some function to retrieve the UCN names from ucnhash.c?

the "unicodenames" patch (which replaces ucnhash) includes this
functionality -- but with a little distance, I think it's better to add
it to the unicodedata module.

(it's included in the step 4 patch, soon to be posted to a patch
manager near you...)
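A sketch of the API as it eventually landed in the unicodedata module
(name() and lookup(), exposing the UCN table in both directions):

```python
import unicodedata

# Character -> UCN name ...
assert unicodedata.name("\u00a9") == "COPYRIGHT SIGN"

# ... and name -> character, matching the \N{...} escape.
assert unicodedata.lookup("COPYRIGHT SIGN") == "\u00a9"
assert "\N{COPYRIGHT SIGN}" == "\u00a9"
```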

</F>




From loewis at informatik.hu-berlin.de  Sat Sep 30 11:47:01 2000
From: loewis at informatik.hu-berlin.de (Martin von Loewis)
Date: Sat, 30 Sep 2000 11:47:01 +0200 (MET DST)
Subject: [Python-Dev] codecs question
In-Reply-To: <000001c02a47$f3f5f100$766940d5@hagrid> (effbot@telia.com)
References: <200009291716.TAA05996@pandora.informatik.hu-berlin.de> <000001c02a47$f3f5f100$766940d5@hagrid>
Message-ID: <200009300947.LAA13652@pandora.informatik.hu-berlin.de>

> the "unicodenames" patch (which replaces ucnhash) includes this
> functionality -- but with a little distance, I think it's better to add
> it to the unicodedata module.
> 
> (it's included in the step 4 patch, soon to be posted to a patch
> manager near you...)

Sounds good. Is there any chance to use this in codecs, then?
I'm thinking of

>>> print u"\N{COPYRIGHT SIGN}".encode("ascii-ucn")
\N{COPYRIGHT SIGN}
>>> print u"\N{COPYRIGHT SIGN}".encode("latin-1-ucn")
?

Regards,
Martin

P.S. Some people will recognize this as the disguised question 'how
can I convert non-convertable characters using the XML entity
notation?'
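Both spellings Martin asks for did eventually appear, not as separate
codecs but as codec error handlers (xmlcharrefreplace in later 2.x
releases, namereplace much later still); a modern sketch:

```python
# The "ascii-ucn" behaviour: unencodable characters become \N{...} escapes.
assert "\u00a9".encode("ascii", "namereplace") == b"\\N{COPYRIGHT SIGN}"

# The disguised question: XML character references instead.
assert "\u00a9".encode("ascii", "xmlcharrefreplace") == b"&#169;"

# Latin-1 can encode the copyright sign directly, so nothing is escaped.
assert "\u00a9".encode("latin-1", "namereplace") == b"\xa9"
```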



From mal at lemburg.com  Sat Sep 30 12:21:43 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Sat, 30 Sep 2000 12:21:43 +0200
Subject: [Python-Dev] codecs question
References: <200009291716.TAA05996@pandora.informatik.hu-berlin.de> <000001c02a47$f3f5f100$766940d5@hagrid> <200009300947.LAA13652@pandora.informatik.hu-berlin.de>
Message-ID: <39D5BEB7.F4045E8B@lemburg.com>

Martin von Loewis wrote:
> 
> > the "unicodenames" patch (which replaces ucnhash) includes this
> > functionality -- but with a little distance, I think it's better to add
> > it to the unicodedata module.
> >
> > (it's included in the step 4 patch, soon to be posted to a patch
> > manager near you...)
> 
> Sounds good. Is there any chance to use this in codecs, then?

If you need speed, you'd have to write a C codec for this
and yes: the ucnhash module does import a C API using a
PyCObject which you can use to access the static C data
table.

Don't know if Fredrik's version will also support this.

I think a C function as access method would be more generic
than the current direct C table access.

> I'm thinking of
> 
> >>> print u"\N{COPYRIGHT SIGN}".encode("ascii-ucn")
> \N{COPYRIGHT SIGN}
> >>> print u"\N{COPYRIGHT SIGN}".encode("latin-1-ucn")
> ?
> 
> Regards,
> Martin
> 
> P.S. Some people will recognize this as the disguised question 'how
> can I convert non-convertable characters using the XML entity
> notation?'

If you just need a single encoding, e.g. Latin-1, simply clone
the codec (it's coded in unicodeobject.c) and add the XML entity
processing.

Unfortunately, reusing the existing codecs is not too
efficient: the reason is that there is no error handling
which would permit you to say "encode as far as you can
and then return the encoded data plus a position marker
in the input stream/data".

Perhaps we should add a new standard error handling
scheme "break" which simply stops encoding/decoding
whenever an error occurs?!

This should then allow reusing existing codecs by
processing the input in slices.
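What later releases actually added is slightly more general than a "break"
scheme: an error-callback registry, which lets an existing codec be reused
without manual slicing. A sketch of the XML-entity case using that
mechanism ("xmlentity" is a made-up handler name for this example):

```python
import codecs

def xml_entities(exc):
    # Replace the unencodable span with numeric character references
    # and tell the codec at which position to resume.
    if isinstance(exc, UnicodeEncodeError):
        span = exc.object[exc.start:exc.end]
        return ("".join("&#%d;" % ord(ch) for ch in span), exc.end)
    raise exc

codecs.register_error("xmlentity", xml_entities)

assert "\u00a9 2000".encode("ascii", "xmlentity") == b"&#169; 2000"
```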

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Fri Sep 29 10:15:18 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 29 Sep 2000 10:15:18 +0200
Subject: [Python-Dev] unicodedata.numeric (was RE: stupid floating point  
 question...)
References: <LNBBLJKPBEHFEDALKOLCGELPHIAA.tim_one@email.msn.com> <045201c029c9$8f49fd10$8119fea9@neil>
Message-ID: <39D44F96.D4342ADB@lemburg.com>

Neil Hodgson wrote:
> 
>    The 0x302* 'Hangzhou' numerals look like they should be classified as
> digits.

Can't change the Unicode 3.0 database... so even though this might
be useful in some contexts, let's stick to the standard.
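What the database actually says can be checked with the unicodedata
module; the Hangzhou numerals carry a numeric value but are not
classified as digits:

```python
import unicodedata

ch = u"\u3021"  # HANGZHOU NUMERAL ONE
print(unicodedata.name(ch))         # HANGZHOU NUMERAL ONE
print(unicodedata.numeric(ch))      # 1.0  -- has a numeric value
print(unicodedata.digit(ch, None))  # None -- but is not a digit
```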

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/




From guido at python.org  Sat Sep 30 22:56:18 2000
From: guido at python.org (Guido van Rossum)
Date: Sat, 30 Sep 2000 15:56:18 -0500
Subject: [Python-Dev] Changes in semantics to str()?
Message-ID: <200009302056.PAA14718@cj20424-a.reston1.va.home.com>

When we changed floats to behave differently on repr() than on str(), we
briefly discussed changes to the container objects as well, but
nothing came of it.

Currently, str() of a tuple, list or dictionary is the same as repr()
of those objects.  This is not very consistent.  For example, when we
have a float like 1.1 which can't be represented exactly, str() yields
"1.1" but repr() yields "1.1000000000000001".  But if we place the
same number in a list, it doesn't matter which function we use: we
always get "[1.1000000000000001]".
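The float example depends on that era's float repr(), but the
underlying inconsistency shows with any object whose str() and
repr() differ (Temp here is just an illustrative class):

```python
class Temp:
    def __repr__(self):
        return "Temp(1.1)"
    def __str__(self):
        return "1.1"

t = Temp()
print(str(t))    # 1.1
print(str([t]))  # [Temp(1.1)] -- the list uses repr() of its items
```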

Below I have included changes to listobject.c, tupleobject.c and
dictobject.c that fix this.  The fixes change the print and str()
callbacks for these objects to use PyObject_Str() on the contained
items -- except if the item is a string or Unicode string.  I made
these exceptions because I don't like the idea of str(["abc"])
yielding [abc] -- I'm too used to the idea of seeing ['abc'] here.
And str() of a Unicode object fails when it contains non-ASCII
characters, so that's no good either -- it would break too much code.

Is it too late to check this in?  Another negative consequence would
be that for user-defined or 3rd party extension objects that have
different repr() and str(), like NumPy arrays, it might break some
code -- but I think this is not very likely.

--Guido van Rossum (home page: http://www.python.org/~guido/)

*** dictobject.c	2000/09/01 23:29:27	2.65
--- dictobject.c	2000/09/30 16:03:04
***************
*** 594,599 ****
--- 594,601 ----
  	register int i;
  	register int any;
  	register dictentry *ep;
+ 	PyObject *item;
+ 	int itemflags;
  
  	i = Py_ReprEnter((PyObject*)mp);
  	if (i != 0) {
***************
*** 609,620 ****
  		if (ep->me_value != NULL) {
  			if (any++ > 0)
  				fprintf(fp, ", ");
! 			if (PyObject_Print((PyObject *)ep->me_key, fp, 0)!=0) {
  				Py_ReprLeave((PyObject*)mp);
  				return -1;
  			}
  			fprintf(fp, ": ");
! 			if (PyObject_Print(ep->me_value, fp, 0) != 0) {
  				Py_ReprLeave((PyObject*)mp);
  				return -1;
  			}
--- 611,630 ----
  		if (ep->me_value != NULL) {
  			if (any++ > 0)
  				fprintf(fp, ", ");
! 			item = (PyObject *)ep->me_key;
! 			itemflags = flags;
! 			if (item == NULL || PyString_Check(item) || PyUnicode_Check(item))
! 				itemflags = 0;
! 			if (PyObject_Print(item, fp, itemflags)!=0) {
  				Py_ReprLeave((PyObject*)mp);
  				return -1;
  			}
  			fprintf(fp, ": ");
! 			item = ep->me_value;
! 			itemflags = flags;
! 			if (item == NULL || PyString_Check(item) || PyUnicode_Check(item))
! 				itemflags = 0;
! 			if (PyObject_Print(item, fp, itemflags) != 0) {
  				Py_ReprLeave((PyObject*)mp);
  				return -1;
  			}
***************
*** 661,666 ****
--- 671,722 ----
  	return v;
  }
  
+ static PyObject *
+ dict_str(dictobject *mp)
+ {
+ 	auto PyObject *v;
+ 	PyObject *sepa, *colon, *item, *repr;
+ 	register int i;
+ 	register int any;
+ 	register dictentry *ep;
+ 
+ 	i = Py_ReprEnter((PyObject*)mp);
+ 	if (i != 0) {
+ 		if (i > 0)
+ 			return PyString_FromString("{...}");
+ 		return NULL;
+ 	}
+ 
+ 	v = PyString_FromString("{");
+ 	sepa = PyString_FromString(", ");
+ 	colon = PyString_FromString(": ");
+ 	any = 0;
+ 	for (i = 0, ep = mp->ma_table; i < mp->ma_size && v; i++, ep++) {
+ 		if (ep->me_value != NULL) {
+ 			if (any++)
+ 				PyString_Concat(&v, sepa);
+ 			item = ep->me_key;
+ 			if (item == NULL || PyString_Check(item) || PyUnicode_Check(item))
+ 				repr = PyObject_Repr(item);
+ 			else
+ 				repr = PyObject_Str(item);
+ 			PyString_ConcatAndDel(&v, repr);
+ 			PyString_Concat(&v, colon);
+ 			item = ep->me_value;
+ 			if (item == NULL || PyString_Check(item) || PyUnicode_Check(item))
+ 				repr = PyObject_Repr(item);
+ 			else
+ 				repr = PyObject_Str(item);
+ 			PyString_ConcatAndDel(&v, repr);
+ 		}
+ 	}
+ 	PyString_ConcatAndDel(&v, PyString_FromString("}"));
+ 	Py_ReprLeave((PyObject*)mp);
+ 	Py_XDECREF(sepa);
+ 	Py_XDECREF(colon);
+ 	return v;
+ }
+ 
  static int
  dict_length(dictobject *mp)
  {
***************
*** 1193,1199 ****
  	&dict_as_mapping,	/*tp_as_mapping*/
  	0,		/* tp_hash */
  	0,		/* tp_call */
! 	0,		/* tp_str */
  	0,		/* tp_getattro */
  	0,		/* tp_setattro */
  	0,		/* tp_as_buffer */
--- 1249,1255 ----
  	&dict_as_mapping,	/*tp_as_mapping*/
  	0,		/* tp_hash */
  	0,		/* tp_call */
! 	(reprfunc)dict_str, /* tp_str */
  	0,		/* tp_getattro */
  	0,		/* tp_setattro */
  	0,		/* tp_as_buffer */
Index: listobject.c
===================================================================
RCS file: /cvsroot/python/python/dist/src/Objects/listobject.c,v
retrieving revision 2.88
diff -c -r2.88 listobject.c
*** listobject.c	2000/09/26 05:46:01	2.88
--- listobject.c	2000/09/30 16:03:04
***************
*** 197,203 ****
  static int
  list_print(PyListObject *op, FILE *fp, int flags)
  {
! 	int i;
  
  	i = Py_ReprEnter((PyObject*)op);
  	if (i != 0) {
--- 197,204 ----
  static int
  list_print(PyListObject *op, FILE *fp, int flags)
  {
! 	int i, itemflags;
! 	PyObject *item;
  
  	i = Py_ReprEnter((PyObject*)op);
  	if (i != 0) {
***************
*** 210,216 ****
  	for (i = 0; i < op->ob_size; i++) {
  		if (i > 0)
  			fprintf(fp, ", ");
! 		if (PyObject_Print(op->ob_item[i], fp, 0) != 0) {
  			Py_ReprLeave((PyObject *)op);
  			return -1;
  		}
--- 211,221 ----
  	for (i = 0; i < op->ob_size; i++) {
  		if (i > 0)
  			fprintf(fp, ", ");
! 		item = op->ob_item[i];
! 		itemflags = flags;
! 		if (item == NULL || PyString_Check(item) || PyUnicode_Check(item))
! 			itemflags = 0;
! 		if (PyObject_Print(item, fp, itemflags) != 0) {
  			Py_ReprLeave((PyObject *)op);
  			return -1;
  		}
***************
*** 245,250 ****
--- 250,285 ----
  	return s;
  }
  
+ static PyObject *
+ list_str(PyListObject *v)
+ {
+ 	PyObject *s, *comma, *item, *repr;
+ 	int i;
+ 
+ 	i = Py_ReprEnter((PyObject*)v);
+ 	if (i != 0) {
+ 		if (i > 0)
+ 			return PyString_FromString("[...]");
+ 		return NULL;
+ 	}
+ 	s = PyString_FromString("[");
+ 	comma = PyString_FromString(", ");
+ 	for (i = 0; i < v->ob_size && s != NULL; i++) {
+ 		if (i > 0)
+ 			PyString_Concat(&s, comma);
+ 		item = v->ob_item[i];
+ 		if (item == NULL || PyString_Check(item) || PyUnicode_Check(item))
+ 			repr = PyObject_Repr(item);
+ 		else
+ 			repr = PyObject_Str(item);
+ 		PyString_ConcatAndDel(&s, repr);
+ 	}
+ 	Py_XDECREF(comma);
+ 	PyString_ConcatAndDel(&s, PyString_FromString("]"));
+ 	Py_ReprLeave((PyObject *)v);
+ 	return s;
+ }
+ 
  static int
  list_compare(PyListObject *v, PyListObject *w)
  {
***************
*** 1484,1490 ****
  	0,		/*tp_as_mapping*/
  	0,		/*tp_hash*/
  	0,		/*tp_call*/
! 	0,		/*tp_str*/
  	0,		/*tp_getattro*/
  	0,		/*tp_setattro*/
  	0,		/*tp_as_buffer*/
--- 1519,1525 ----
  	0,		/*tp_as_mapping*/
  	0,		/*tp_hash*/
  	0,		/*tp_call*/
! 	(reprfunc)list_str, /*tp_str*/
  	0,		/*tp_getattro*/
  	0,		/*tp_setattro*/
  	0,		/*tp_as_buffer*/
***************
*** 1561,1567 ****
  	0,		/*tp_as_mapping*/
  	0,		/*tp_hash*/
  	0,		/*tp_call*/
! 	0,		/*tp_str*/
  	0,		/*tp_getattro*/
  	0,		/*tp_setattro*/
  	0,		/*tp_as_buffer*/
--- 1596,1602 ----
  	0,		/*tp_as_mapping*/
  	0,		/*tp_hash*/
  	0,		/*tp_call*/
! 	(reprfunc)list_str, /*tp_str*/
  	0,		/*tp_getattro*/
  	0,		/*tp_setattro*/
  	0,		/*tp_as_buffer*/
Index: tupleobject.c
===================================================================
RCS file: /cvsroot/python/python/dist/src/Objects/tupleobject.c,v
retrieving revision 2.46
diff -c -r2.46 tupleobject.c
*** tupleobject.c	2000/09/15 07:32:39	2.46
--- tupleobject.c	2000/09/30 16:03:04
***************
*** 167,178 ****
  static int
  tupleprint(PyTupleObject *op, FILE *fp, int flags)
  {
! 	int i;
  	fprintf(fp, "(");
  	for (i = 0; i < op->ob_size; i++) {
  		if (i > 0)
  			fprintf(fp, ", ");
! 		if (PyObject_Print(op->ob_item[i], fp, 0) != 0)
  			return -1;
  	}
  	if (op->ob_size == 1)
--- 167,183 ----
  static int
  tupleprint(PyTupleObject *op, FILE *fp, int flags)
  {
! 	int i, itemflags;
! 	PyObject *item;
  	fprintf(fp, "(");
  	for (i = 0; i < op->ob_size; i++) {
  		if (i > 0)
  			fprintf(fp, ", ");
! 		item = op->ob_item[i];
! 		itemflags = flags;
! 		if (item == NULL || PyString_Check(item) || PyUnicode_Check(item))
! 			itemflags = 0;
! 		if (PyObject_Print(item, fp, itemflags) != 0)
  			return -1;
  	}
  	if (op->ob_size == 1)
***************
*** 200,205 ****
--- 205,234 ----
  	return s;
  }
  
+ static PyObject *
+ tuplestr(PyTupleObject *v)
+ {
+ 	PyObject *s, *comma, *item, *repr;
+ 	int i;
+ 	s = PyString_FromString("(");
+ 	comma = PyString_FromString(", ");
+ 	for (i = 0; i < v->ob_size && s != NULL; i++) {
+ 		if (i > 0)
+ 			PyString_Concat(&s, comma);
+ 		item = v->ob_item[i];
+ 		if (item == NULL || PyString_Check(item) || PyUnicode_Check(item))
+ 			repr = PyObject_Repr(item);
+ 		else
+ 			repr = PyObject_Str(item);
+ 		PyString_ConcatAndDel(&s, repr);
+ 	}
+ 	Py_DECREF(comma);
+ 	if (v->ob_size == 1)
+ 		PyString_ConcatAndDel(&s, PyString_FromString(","));
+ 	PyString_ConcatAndDel(&s, PyString_FromString(")"));
+ 	return s;
+ }
+ 
  static int
  tuplecompare(register PyTupleObject *v, register PyTupleObject *w)
  {
***************
*** 412,418 ****
  	0,		/*tp_as_mapping*/
  	(hashfunc)tuplehash, /*tp_hash*/
  	0,		/*tp_call*/
! 	0,		/*tp_str*/
  	0,		/*tp_getattro*/
  	0,		/*tp_setattro*/
  	0,		/*tp_as_buffer*/
--- 441,447 ----
  	0,		/*tp_as_mapping*/
  	(hashfunc)tuplehash, /*tp_hash*/
  	0,		/*tp_call*/
! 	(reprfunc)tuplestr, /*tp_str*/
  	0,		/*tp_getattro*/
  	0,		/*tp_setattro*/
  	0,		/*tp_as_buffer*/